09:19:13 Triggered by Gerrit: https://gerrit.onap.org/r/c/policy/docker/+/141341
09:19:13 Running as SYSTEM
09:19:13 [EnvInject] - Loading node environment variables.
09:19:13 Building remotely on prd-ubuntu1804-docker-8c-8g-22280 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp
09:19:13 [ssh-agent] Looking for ssh-agent implementation...
09:19:13 [ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
09:19:13 $ ssh-agent
09:19:13 SSH_AUTH_SOCK=/tmp/ssh-L6F8IuiPdsYY/agent.2052
09:19:13 SSH_AGENT_PID=2054
09:19:13 [ssh-agent] Started.
09:19:13 Running ssh-add (command line suppressed)
09:19:13 Identity added: /w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp@tmp/private_key_16717763560632786762.key (/w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp@tmp/private_key_16717763560632786762.key)
09:19:13 [ssh-agent] Using credentials onap-jobbuiler (Gerrit user)
09:19:13 The recommended git tool is: NONE
09:19:15 using credential onap-jenkins-ssh
09:19:15 Wiping out workspace first.
09:19:15 Cloning the remote Git repository
09:19:15 Cloning repository git://cloud.onap.org/mirror/policy/docker.git
09:19:15  > git init /w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp # timeout=10
09:19:15 Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
09:19:15  > git --version # timeout=10
09:19:15  > git --version # 'git version 2.17.1'
09:19:15 using GIT_SSH to set credentials Gerrit user
09:19:15 Verifying host key using manually-configured host key entries
09:19:15  > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git +refs/heads/*:refs/remotes/origin/* # timeout=30
09:19:16  > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
09:19:16  > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
09:19:16  > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
09:19:16 Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
09:19:16 using GIT_SSH to set credentials Gerrit user
09:19:16 Verifying host key using manually-configured host key entries
09:19:16  > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git refs/changes/41/141341/1 # timeout=30
09:19:16  > git rev-parse 59019a04744343798d7ed958303e89d28d6a4524^{commit} # timeout=10
09:19:16 JENKINS-19022: warning: possible memory leak due to Git plugin usage; see: https://plugins.jenkins.io/git/#remove-git-plugin-buildsbybranch-builddata-script
09:19:16 Checking out Revision 59019a04744343798d7ed958303e89d28d6a4524 (refs/changes/41/141341/1)
09:19:16  > git config core.sparsecheckout # timeout=10
09:19:16  > git checkout -f 59019a04744343798d7ed958303e89d28d6a4524 # timeout=30
09:19:20 Commit message: "Add missing delete composition in CSIT"
09:19:20  > git rev-parse FETCH_HEAD^{commit} # timeout=10
09:19:20  > git rev-list --no-walk ed38a50541249063daf2cfb00b312fb173adeace # timeout=10
09:19:20 provisioning config files...
09:19:20 copy managed file [npmrc] to file:/home/jenkins/.npmrc
09:19:20 copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf
09:19:20 [policy-opa-pdp-master-project-csit-verify-opa-pdp] $ /bin/bash /tmp/jenkins9152182574220324453.sh
09:19:20 ---> python-tools-install.sh
09:19:20 Setup pyenv:
09:19:20 * system (set by /opt/pyenv/version)
09:19:20 * 3.8.13 (set by /opt/pyenv/version)
09:19:20 * 3.9.13 (set by /opt/pyenv/version)
09:19:21 * 3.10.6 (set by /opt/pyenv/version)
09:19:24 lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-fyfu
09:19:24 lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv
09:19:28 lf-activate-venv(): INFO: Installing: lftools
09:19:52 lf-activate-venv(): INFO: Adding /tmp/venv-fyfu/bin to PATH
09:19:52 Generating Requirements File
09:20:12 Python 3.10.6
09:20:12 pip 25.1.1 from /tmp/venv-fyfu/lib/python3.10/site-packages/pip (python 3.10)
09:20:12 appdirs==1.4.4
09:20:12 argcomplete==3.6.2
09:20:12 aspy.yaml==1.3.0
09:20:12 attrs==25.3.0
09:20:12 autopage==0.5.2
09:20:12 beautifulsoup4==4.13.4
09:20:12 boto3==1.38.39
09:20:12 botocore==1.38.39
09:20:12 bs4==0.0.2
09:20:12 cachetools==5.5.2
09:20:12 certifi==2025.6.15
09:20:12 cffi==1.17.1
09:20:12 cfgv==3.4.0
09:20:12 chardet==5.2.0
09:20:12 charset-normalizer==3.4.2
09:20:12 click==8.2.1
09:20:12 cliff==4.10.0
09:20:12 cmd2==2.6.1
09:20:12 cryptography==3.3.2
09:20:12 debtcollector==3.0.0
09:20:12 decorator==5.2.1
09:20:12 defusedxml==0.7.1
09:20:12 Deprecated==1.2.18
09:20:12 distlib==0.3.9
09:20:12 dnspython==2.7.0
09:20:12 docker==7.1.0
09:20:12 dogpile.cache==1.4.0
09:20:12 durationpy==0.10
09:20:12 email_validator==2.2.0
09:20:12 filelock==3.18.0
09:20:12 future==1.0.0
09:20:12 gitdb==4.0.12
09:20:12 GitPython==3.1.44
09:20:12 google-auth==2.40.3
09:20:12 httplib2==0.22.0
09:20:12 identify==2.6.12
09:20:12 idna==3.10
09:20:12 importlib-resources==1.5.0
09:20:12 iso8601==2.1.0
09:20:12 Jinja2==3.1.6
09:20:12 jmespath==1.0.1
09:20:12 jsonpatch==1.33
09:20:12 jsonpointer==3.0.0
09:20:12 jsonschema==4.24.0
09:20:12 jsonschema-specifications==2025.4.1
09:20:12 keystoneauth1==5.11.1
09:20:12 kubernetes==33.1.0
09:20:12 lftools==0.37.13
09:20:12 lxml==5.4.0
09:20:12 MarkupSafe==3.0.2
09:20:12 msgpack==1.1.1
09:20:12 multi_key_dict==2.0.3
09:20:12 munch==4.0.0
09:20:12 netaddr==1.3.0
09:20:12 niet==1.4.2
09:20:12 nodeenv==1.9.1
09:20:12 oauth2client==4.1.3
09:20:12 oauthlib==3.3.0
09:20:12 openstacksdk==4.6.0
09:20:12 os-client-config==2.1.0
09:20:12 os-service-types==1.7.0
09:20:12 osc-lib==4.0.2
09:20:12 oslo.config==9.8.0
09:20:12 oslo.context==6.0.0
09:20:12 oslo.i18n==6.5.1
09:20:12 oslo.log==7.1.0
09:20:12 oslo.serialization==5.7.0
09:20:12 oslo.utils==9.0.0
09:20:12 packaging==25.0
09:20:12 pbr==6.1.1
09:20:12 platformdirs==4.3.8
09:20:12 prettytable==3.16.0
09:20:12 psutil==7.0.0
09:20:12 pyasn1==0.6.1
09:20:12 pyasn1_modules==0.4.2
09:20:12 pycparser==2.22
09:20:12 pygerrit2==2.0.15
09:20:12 PyGithub==2.6.1
09:20:12 PyJWT==2.10.1
09:20:12 PyNaCl==1.5.0
09:20:12 pyparsing==2.4.7
09:20:12 pyperclip==1.9.0
09:20:12 pyrsistent==0.20.0
09:20:12 python-cinderclient==9.7.0
09:20:12 python-dateutil==2.9.0.post0
09:20:12 python-heatclient==4.2.0
09:20:12 python-jenkins==1.8.2
09:20:12 python-keystoneclient==5.6.0
09:20:12 python-magnumclient==4.8.1
09:20:12 python-openstackclient==8.1.0
09:20:12 python-swiftclient==4.8.0
09:20:12 PyYAML==6.0.2
09:20:12 referencing==0.36.2
09:20:12 requests==2.32.4
09:20:12 requests-oauthlib==2.0.0
09:20:12 requestsexceptions==1.4.0
09:20:12 rfc3986==2.0.0
09:20:12 rpds-py==0.25.1
09:20:12 rsa==4.9.1
09:20:12 ruamel.yaml==0.18.14
09:20:12 ruamel.yaml.clib==0.2.12
09:20:12 s3transfer==0.13.0
09:20:12 simplejson==3.20.1
09:20:12 six==1.17.0
09:20:12 smmap==5.0.2
09:20:12 soupsieve==2.7
09:20:12 stevedore==5.4.1
09:20:12 tabulate==0.9.0
09:20:12 toml==0.10.2
09:20:12 tomlkit==0.13.3
09:20:12 tqdm==4.67.1
09:20:12 typing_extensions==4.14.0
09:20:12 tzdata==2025.2
09:20:12 urllib3==1.26.20
09:20:12 virtualenv==20.31.2
09:20:12 wcwidth==0.2.13
09:20:12 websocket-client==1.8.0
09:20:12 wrapt==1.17.2
09:20:12 xdg==6.0.0
09:20:12 xmltodict==0.14.2
09:20:12 yq==3.4.3
09:20:12 [EnvInject] - Injecting environment variables from a build step.
09:20:12 [EnvInject] - Injecting as environment variables the properties content
09:20:12 SET_JDK_VERSION=openjdk17
09:20:12 GIT_URL="git://cloud.onap.org/mirror"
09:20:12
09:20:12 [EnvInject] - Variables injected successfully.
09:20:12 [policy-opa-pdp-master-project-csit-verify-opa-pdp] $ /bin/sh /tmp/jenkins6262530646787347020.sh
09:20:12 ---> update-java-alternatives.sh
09:20:12 ---> Updating Java version
09:20:12 ---> Ubuntu/Debian system detected
09:20:13 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode
09:20:13 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode
09:20:13 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode
09:20:13 openjdk version "17.0.4" 2022-07-19
09:20:13 OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04)
09:20:13 OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing)
09:20:13 JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64
09:20:13 [EnvInject] - Injecting environment variables from a build step.
09:20:13 [EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env'
09:20:13 [EnvInject] - Variables injected successfully.
09:20:13 [policy-opa-pdp-master-project-csit-verify-opa-pdp] $ /bin/sh -xe /tmp/jenkins10205303522544514315.sh
09:20:13 + /w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp/csit/run-project-csit.sh opa-pdp
09:20:13 WARNING! Using --password via the CLI is insecure. Use --password-stdin.
09:20:13 WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json.
09:20:13 Configure a credential helper to remove this warning. See
09:20:13 https://docs.docker.com/engine/reference/commandline/login/#credentials-store
09:20:13
09:20:13 Login Succeeded
09:20:13 docker: 'compose' is not a docker command.
09:20:13 See 'docker --help'
09:20:13 Docker Compose Plugin not installed. Installing now...
09:20:13   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
09:20:13                                  Dload  Upload   Total   Spent    Left  Speed
09:20:14   0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0    0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
09:20:14  71 60.2M   71 42.7M    0     0  55.4M      0  0:00:01 --:--:--  0:00:01 55.4M  100 60.2M  100 60.2M    0     0  63.3M      0 --:--:-- --:--:-- --:--:-- 97.4M
09:20:14 Setting project configuration for: opa-pdp
09:20:14 Configuring docker compose...
09:20:16 Starting opa-pdp using postgres + Grafana/Prometheus
09:20:16 opa-pdp Pulling
09:20:16 pap Pulling
09:20:16 prometheus Pulling
09:20:16 zookeeper Pulling
09:20:16 kafka Pulling
09:20:16 policy-db-migrator Pulling
09:20:16 postgres Pulling
09:20:16 api Pulling
09:20:16 grafana Pulling
09:20:17 [... per-layer docker compose pull progress through 09:20:23: fs layer download, checksum verification and extraction for the images above ...]
09:20:22 opa-pdp Pulled
[=================================================> ] 90.24MB/91.87MB 09:20:23 1e017ebebdbd Downloading [=============================================> ] 33.91MB/37.19MB 09:20:23 82bfc142787e Downloading [> ] 97.22kB/8.613MB 09:20:23 55f2b468da67 Downloading [=> ] 10.27MB/257.9MB 09:20:23 c124ba1a8b26 Extracting [==================================================>] 91.87MB/91.87MB 09:20:23 1e017ebebdbd Verifying Checksum 09:20:23 1e017ebebdbd Download complete 09:20:23 46baca71a4ef Downloading [========> ] 3.01kB/18.11kB 09:20:23 46baca71a4ef Downloading [==================================================>] 18.11kB/18.11kB 09:20:23 46baca71a4ef Verifying Checksum 09:20:23 46baca71a4ef Download complete 09:20:23 f3b09c502777 Extracting [==============================> ] 34.54MB/56.52MB 09:20:23 eabd8714fec9 Extracting [> ] 557.1kB/375MB 09:20:23 b0e0ef7895f4 Downloading [> ] 375.7kB/37.01MB 09:20:23 82bfc142787e Downloading [=============> ] 2.358MB/8.613MB 09:20:23 55f2b468da67 Downloading [====> ] 22.17MB/257.9MB 09:20:23 e73cb4a42719 Extracting [======> ] 13.37MB/109.1MB 09:20:23 api Pulled 09:20:23 1e017ebebdbd Extracting [> ] 393.2kB/37.19MB 09:20:23 eabd8714fec9 Extracting [=> ] 7.799MB/375MB 09:20:23 f3b09c502777 Extracting [=====================================> ] 42.89MB/56.52MB 09:20:23 7910fddefabc Pull complete 09:20:23 82bfc142787e Downloading [==============================> ] 5.307MB/8.613MB 09:20:23 4a4d0948b0bf Pull complete 09:20:23 55f2b468da67 Downloading [======> ] 31.36MB/257.9MB 09:20:23 b0e0ef7895f4 Downloading [===> ] 2.637MB/37.01MB 09:20:23 e73cb4a42719 Extracting [=======> ] 16.15MB/109.1MB 09:20:23 c124ba1a8b26 Pull complete 09:20:23 1e017ebebdbd Extracting [====> ] 3.146MB/37.19MB 09:20:23 eabd8714fec9 Extracting [=> ] 13.93MB/375MB 09:20:23 f3b09c502777 Extracting [=========================================> ] 47.35MB/56.52MB 09:20:23 82bfc142787e Verifying Checksum 09:20:23 82bfc142787e Download complete 09:20:23 c0c90eeb8aca Downloading [==================================================>] 1.105kB/1.105kB 09:20:23 c0c90eeb8aca Verifying Checksum 09:20:23 c0c90eeb8aca Download complete 09:20:23 55f2b468da67 Downloading [========> ] 42.71MB/257.9MB 09:20:23 5cfb27c10ea5 Downloading [==================================================>] 852B/852B 09:20:23 5cfb27c10ea5 Verifying Checksum 09:20:23 5cfb27c10ea5 Download complete 09:20:23 b0e0ef7895f4 Downloading [========> ] 6.028MB/37.01MB 09:20:23 e73cb4a42719 Extracting [========> ] 18.38MB/109.1MB 09:20:23 6394804c2196 Extracting [==================================================>] 1.299kB/1.299kB 09:20:23 6394804c2196 Extracting [==================================================>] 1.299kB/1.299kB 09:20:23 40a5eed61bb0 Downloading [==================================================>] 98B/98B 09:20:23 40a5eed61bb0 Verifying Checksum 09:20:23 40a5eed61bb0 Download complete 09:20:23 1e017ebebdbd Extracting [======> ] 5.112MB/37.19MB 09:20:23 f3b09c502777 Extracting [================================================> ] 54.59MB/56.52MB 09:20:23 policy-db-migrator Pulled 09:20:23 e040ea11fa10 Downloading [==================================================>] 173B/173B 09:20:23 e040ea11fa10 Verifying Checksum 09:20:23 e040ea11fa10 Download complete 09:20:23 eabd8714fec9 Extracting [==> ] 20.05MB/375MB 09:20:23 04f6155c873d Extracting [> ] 557.1kB/107.3MB 09:20:23 55f2b468da67 Downloading [===========> ] 57.85MB/257.9MB 09:20:23 e73cb4a42719 Extracting [=========> ] 21.73MB/109.1MB 09:20:23 b0e0ef7895f4 Downloading 
[============> ] 9.42MB/37.01MB 09:20:23 1e017ebebdbd Extracting [========> ] 6.685MB/37.19MB 09:20:23 09d5a3f70313 Downloading [> ] 539.6kB/109.2MB 09:20:23 f3b09c502777 Extracting [=================================================> ] 56.26MB/56.52MB 09:20:23 eabd8714fec9 Extracting [===> ] 22.84MB/375MB 09:20:23 04f6155c873d Extracting [=> ] 2.228MB/107.3MB 09:20:23 f3b09c502777 Extracting [==================================================>] 56.52MB/56.52MB 09:20:23 55f2b468da67 Downloading [=============> ] 68.12MB/257.9MB 09:20:23 b0e0ef7895f4 Downloading [================> ] 12.06MB/37.01MB 09:20:23 6394804c2196 Pull complete 09:20:23 e73cb4a42719 Extracting [===========> ] 24.51MB/109.1MB 09:20:23 pap Pulled 09:20:23 f3b09c502777 Pull complete 09:20:23 408012a7b118 Extracting [==================================================>] 637B/637B 09:20:23 408012a7b118 Extracting [==================================================>] 637B/637B 09:20:23 1e017ebebdbd Extracting [=============> ] 9.83MB/37.19MB 09:20:23 09d5a3f70313 Downloading [> ] 2.162MB/109.2MB 09:20:23 55f2b468da67 Downloading [==============> ] 76.23MB/257.9MB 09:20:23 04f6155c873d Extracting [=> ] 3.899MB/107.3MB 09:20:23 eabd8714fec9 Extracting [===> ] 23.95MB/375MB 09:20:23 b0e0ef7895f4 Downloading [=====================> ] 15.83MB/37.01MB 09:20:23 e73cb4a42719 Extracting [============> ] 26.74MB/109.1MB 09:20:23 1e017ebebdbd Extracting [=================> ] 12.98MB/37.19MB 09:20:23 55f2b468da67 Downloading [=================> ] 90.83MB/257.9MB 09:20:23 09d5a3f70313 Downloading [=> ] 3.784MB/109.2MB 09:20:23 04f6155c873d Extracting [==> ] 6.128MB/107.3MB 09:20:23 eabd8714fec9 Extracting [====> ] 30.08MB/375MB 09:20:23 b0e0ef7895f4 Downloading [=========================> ] 19.22MB/37.01MB 09:20:23 408012a7b118 Pull complete 09:20:23 44986281b8b9 Extracting [==================================================>] 4.022kB/4.022kB 09:20:23 44986281b8b9 Extracting [==================================================>] 4.022kB/4.022kB 09:20:23 e73cb4a42719 Extracting [==============> ] 30.64MB/109.1MB 09:20:23 1e017ebebdbd Extracting [=====================> ] 16.12MB/37.19MB 09:20:23 55f2b468da67 Downloading [===================> ] 100.6MB/257.9MB 09:20:23 09d5a3f70313 Downloading [==> ] 5.946MB/109.2MB 09:20:23 04f6155c873d Extracting [====> ] 8.913MB/107.3MB 09:20:23 eabd8714fec9 Extracting [====> ] 36.21MB/375MB 09:20:24 b0e0ef7895f4 Downloading [===============================> ] 22.99MB/37.01MB 09:20:24 1e017ebebdbd Extracting [===========================> ] 20.45MB/37.19MB 09:20:24 55f2b468da67 Downloading [======================> ] 114.1MB/257.9MB 09:20:24 e73cb4a42719 Extracting [===============> ] 34.54MB/109.1MB 09:20:24 44986281b8b9 Pull complete 09:20:24 eabd8714fec9 Extracting [======> ] 46.79MB/375MB 09:20:24 bf70c5107ab5 Extracting [==================================================>] 1.44kB/1.44kB 09:20:24 bf70c5107ab5 Extracting [==================================================>] 1.44kB/1.44kB 09:20:24 04f6155c873d Extracting [=====> ] 12.26MB/107.3MB 09:20:24 09d5a3f70313 Downloading [===> ] 8.65MB/109.2MB 09:20:24 b0e0ef7895f4 Downloading [===================================> ] 26MB/37.01MB 09:20:24 1e017ebebdbd Extracting [===============================> ] 23.59MB/37.19MB 09:20:24 55f2b468da67 Downloading [=========================> ] 130.8MB/257.9MB 09:20:24 e73cb4a42719 Extracting [=================> ] 37.32MB/109.1MB 09:20:24 eabd8714fec9 Extracting [=======> ] 55.71MB/375MB 09:20:24 09d5a3f70313 
Downloading [=====> ] 11.35MB/109.2MB 09:20:24 04f6155c873d Extracting [=======> ] 15.6MB/107.3MB 09:20:24 b0e0ef7895f4 Downloading [======================================> ] 28.64MB/37.01MB 09:20:24 bf70c5107ab5 Pull complete 09:20:24 1ccde423731d Extracting [==========================> ] 32.77kB/61.44kB 09:20:24 1ccde423731d Extracting [==================================================>] 61.44kB/61.44kB 09:20:24 55f2b468da67 Downloading [===========================> ] 140.6MB/257.9MB 09:20:24 1e017ebebdbd Extracting [===================================> ] 26.35MB/37.19MB 09:20:24 e73cb4a42719 Extracting [===================> ] 41.78MB/109.1MB 09:20:24 09d5a3f70313 Downloading [=======> ] 15.68MB/109.2MB 09:20:24 eabd8714fec9 Extracting [========> ] 63.5MB/375MB 09:20:24 04f6155c873d Extracting [=======> ] 16.71MB/107.3MB 09:20:24 b0e0ef7895f4 Downloading [============================================> ] 33.16MB/37.01MB 09:20:24 55f2b468da67 Downloading [=============================> ] 153MB/257.9MB 09:20:24 e73cb4a42719 Extracting [=====================> ] 46.79MB/109.1MB 09:20:24 1e017ebebdbd Extracting [=======================================> ] 29.1MB/37.19MB 09:20:24 09d5a3f70313 Downloading [==========> ] 22.71MB/109.2MB 09:20:24 eabd8714fec9 Extracting [=========> ] 74.09MB/375MB 09:20:24 b0e0ef7895f4 Verifying Checksum 09:20:24 b0e0ef7895f4 Download complete 09:20:24 356f5c2c843b Downloading [=========================================> ] 3.011kB/3.623kB 09:20:24 356f5c2c843b Downloading [==================================================>] 3.623kB/3.623kB 09:20:24 356f5c2c843b Verifying Checksum 09:20:24 356f5c2c843b Download complete 09:20:24 1ccde423731d Pull complete 09:20:24 7221d93db8a9 Extracting [==================================================>] 100B/100B 09:20:24 7221d93db8a9 Extracting [==================================================>] 100B/100B 09:20:24 04f6155c873d Extracting [========> ] 17.83MB/107.3MB 09:20:24 55f2b468da67 Downloading [================================> ] 167.1MB/257.9MB 09:20:24 1e017ebebdbd Extracting [===========================================> ] 32.24MB/37.19MB 09:20:24 e73cb4a42719 Extracting [=======================> ] 50.69MB/109.1MB 09:20:24 09d5a3f70313 Downloading [==============> ] 32.44MB/109.2MB 09:20:24 eabd8714fec9 Extracting [===========> ] 84.12MB/375MB 09:20:24 04f6155c873d Extracting [=========> ] 20.05MB/107.3MB 09:20:24 55f2b468da67 Downloading [==================================> ] 180MB/257.9MB 09:20:24 e73cb4a42719 Extracting [========================> ] 52.36MB/109.1MB 09:20:24 09d5a3f70313 Downloading [====================> ] 45.42MB/109.2MB 09:20:24 1e017ebebdbd Extracting [=============================================> ] 34.21MB/37.19MB 09:20:24 eabd8714fec9 Extracting [===========> ] 89.69MB/375MB 09:20:24 55f2b468da67 Downloading [=====================================> ] 190.9MB/257.9MB 09:20:24 04f6155c873d Extracting [==========> ] 22.84MB/107.3MB 09:20:24 09d5a3f70313 Downloading [=========================> ] 55.69MB/109.2MB 09:20:24 1e017ebebdbd Extracting [===============================================> ] 35.39MB/37.19MB 09:20:24 e73cb4a42719 Extracting [=========================> ] 54.59MB/109.1MB 09:20:24 eabd8714fec9 Extracting [============> ] 95.81MB/375MB 09:20:24 55f2b468da67 Downloading [=======================================> ] 203.8MB/257.9MB 09:20:24 1e017ebebdbd Extracting [==================================================>] 37.19MB/37.19MB 09:20:24 04f6155c873d Extracting [===========> ] 
25.62MB/107.3MB 09:20:24 09d5a3f70313 Downloading [===============================> ] 69.2MB/109.2MB 09:20:24 eabd8714fec9 Extracting [=============> ] 99.16MB/375MB 09:20:24 e73cb4a42719 Extracting [=========================> ] 56.26MB/109.1MB 09:20:24 55f2b468da67 Downloading [=========================================> ] 215.2MB/257.9MB 09:20:24 7221d93db8a9 Pull complete 09:20:24 7df673c7455d Extracting [==================================================>] 694B/694B 09:20:24 7df673c7455d Extracting [==================================================>] 694B/694B 09:20:24 04f6155c873d Extracting [==============> ] 30.08MB/107.3MB 09:20:24 09d5a3f70313 Downloading [====================================> ] 80.56MB/109.2MB 09:20:24 e73cb4a42719 Extracting [===========================> ] 59.05MB/109.1MB 09:20:24 eabd8714fec9 Extracting [==============> ] 106.4MB/375MB 09:20:24 1e017ebebdbd Pull complete 09:20:24 55f2b468da67 Downloading [===========================================> ] 226MB/257.9MB 09:20:25 09d5a3f70313 Downloading [===========================================> ] 95.7MB/109.2MB 09:20:25 04f6155c873d Extracting [================> ] 34.54MB/107.3MB 09:20:25 eabd8714fec9 Extracting [==============> ] 110.3MB/375MB 09:20:25 e73cb4a42719 Extracting [============================> ] 62.39MB/109.1MB 09:20:25 55f2b468da67 Downloading [==============================================> ] 240.6MB/257.9MB 09:20:25 09d5a3f70313 Verifying Checksum 09:20:25 09d5a3f70313 Download complete 09:20:25 04f6155c873d Extracting [=================> ] 37.88MB/107.3MB 09:20:25 e73cb4a42719 Extracting [==============================> ] 66.29MB/109.1MB 09:20:25 eabd8714fec9 Extracting [===============> ] 114.8MB/375MB 09:20:25 55f2b468da67 Downloading [=================================================> ] 254.7MB/257.9MB 09:20:25 55f2b468da67 Verifying Checksum 09:20:25 55f2b468da67 Download complete 09:20:25 04f6155c873d Extracting [==================> ] 40.67MB/107.3MB 09:20:25 eabd8714fec9 Extracting [===============> ] 119.2MB/375MB 09:20:25 e73cb4a42719 Extracting [================================> ] 70.75MB/109.1MB 09:20:25 04f6155c873d Extracting [====================> ] 44.56MB/107.3MB 09:20:25 55f2b468da67 Extracting [> ] 557.1kB/257.9MB 09:20:25 e73cb4a42719 Extracting [==================================> ] 74.65MB/109.1MB 09:20:25 eabd8714fec9 Extracting [================> ] 124.8MB/375MB 09:20:25 7df673c7455d Pull complete 09:20:25 e73cb4a42719 Extracting [===================================> ] 76.87MB/109.1MB 09:20:25 55f2b468da67 Extracting [> ] 3.899MB/257.9MB 09:20:25 04f6155c873d Extracting [=====================> ] 46.24MB/107.3MB 09:20:25 eabd8714fec9 Extracting [================> ] 125.9MB/375MB 09:20:25 eabd8714fec9 Extracting [=================> ] 127.6MB/375MB 09:20:25 04f6155c873d Extracting [======================> ] 49.02MB/107.3MB 09:20:25 55f2b468da67 Extracting [==> ] 11.7MB/257.9MB 09:20:25 e73cb4a42719 Extracting [====================================> ] 78.54MB/109.1MB 09:20:25 eabd8714fec9 Extracting [=================> ] 130.4MB/375MB 09:20:25 04f6155c873d Extracting [========================> ] 52.36MB/107.3MB 09:20:25 55f2b468da67 Extracting [===> ] 20.61MB/257.9MB 09:20:25 e73cb4a42719 Extracting [=====================================> ] 82.44MB/109.1MB 09:20:25 eabd8714fec9 Extracting [=================> ] 134.8MB/375MB 09:20:25 04f6155c873d Extracting [==========================> ] 56.26MB/107.3MB 09:20:25 e73cb4a42719 Extracting [=======================================> ] 
86.34MB/109.1MB 09:20:26 55f2b468da67 Extracting [====> ] 24.51MB/257.9MB 09:20:26 eabd8714fec9 Extracting [==================> ] 138.1MB/375MB 09:20:26 04f6155c873d Extracting [===========================> ] 59.6MB/107.3MB 09:20:26 e73cb4a42719 Extracting [==========================================> ] 91.91MB/109.1MB 09:20:26 55f2b468da67 Extracting [======> ] 33.98MB/257.9MB 09:20:26 eabd8714fec9 Extracting [==================> ] 140.9MB/375MB 09:20:26 04f6155c873d Extracting [=============================> ] 64.06MB/107.3MB 09:20:26 e73cb4a42719 Extracting [===========================================> ] 94.14MB/109.1MB 09:20:26 55f2b468da67 Extracting [========> ] 45.68MB/257.9MB 09:20:26 eabd8714fec9 Extracting [===================> ] 144.8MB/375MB 09:20:26 04f6155c873d Extracting [===============================> ] 66.85MB/107.3MB 09:20:26 e73cb4a42719 Extracting [============================================> ] 96.93MB/109.1MB 09:20:26 55f2b468da67 Extracting [==========> ] 54.03MB/257.9MB 09:20:26 eabd8714fec9 Extracting [===================> ] 148.2MB/375MB 09:20:26 04f6155c873d Extracting [================================> ] 69.63MB/107.3MB 09:20:26 e73cb4a42719 Extracting [=============================================> ] 100.3MB/109.1MB 09:20:26 55f2b468da67 Extracting [============> ] 65.18MB/257.9MB 09:20:26 eabd8714fec9 Extracting [====================> ] 152.1MB/375MB 09:20:26 04f6155c873d Extracting [==================================> ] 73.53MB/107.3MB 09:20:26 e73cb4a42719 Extracting [===============================================> ] 103.1MB/109.1MB 09:20:26 55f2b468da67 Extracting [==============> ] 76.32MB/257.9MB 09:20:26 eabd8714fec9 Extracting [====================> ] 154.9MB/375MB 09:20:26 04f6155c873d Extracting [===================================> ] 76.32MB/107.3MB 09:20:26 e73cb4a42719 Extracting [================================================> ] 104.7MB/109.1MB 09:20:26 55f2b468da67 Extracting [================> ] 85.79MB/257.9MB 09:20:26 eabd8714fec9 Extracting [=====================> ] 157.6MB/375MB 09:20:26 04f6155c873d Extracting [=====================================> ] 79.66MB/107.3MB 09:20:26 e73cb4a42719 Extracting [================================================> ] 106.4MB/109.1MB 09:20:26 55f2b468da67 Extracting [==================> ] 97.48MB/257.9MB 09:20:26 eabd8714fec9 Extracting [=====================> ] 161.5MB/375MB 09:20:26 04f6155c873d Extracting [======================================> ] 83MB/107.3MB 09:20:26 55f2b468da67 Extracting [===================> ] 103.1MB/257.9MB 09:20:26 eabd8714fec9 Extracting [=====================> ] 163.8MB/375MB 09:20:26 e73cb4a42719 Extracting [=================================================> ] 107.5MB/109.1MB 09:20:26 04f6155c873d Extracting [========================================> ] 87.46MB/107.3MB 09:20:26 55f2b468da67 Extracting [====================> ] 108.1MB/257.9MB 09:20:26 e73cb4a42719 Extracting [==================================================>] 109.1MB/109.1MB 09:20:26 eabd8714fec9 Extracting [======================> ] 167.7MB/375MB 09:20:27 04f6155c873d Extracting [===========================================> ] 93.03MB/107.3MB 09:20:27 55f2b468da67 Extracting [=====================> ] 112MB/257.9MB 09:20:27 eabd8714fec9 Extracting [=======================> ] 174.9MB/375MB 09:20:27 04f6155c873d Extracting [==============================================> ] 99.16MB/107.3MB 09:20:27 55f2b468da67 Extracting [======================> ] 115.9MB/257.9MB 09:20:27 eabd8714fec9 Extracting 
[=========================> ] 189.4MB/375MB 09:20:27 04f6155c873d Extracting [===============================================> ] 101.4MB/107.3MB 09:20:27 55f2b468da67 Extracting [======================> ] 117.5MB/257.9MB 09:20:27 eabd8714fec9 Extracting [==========================> ] 196.6MB/375MB 09:20:27 55f2b468da67 Extracting [=======================> ] 122.6MB/257.9MB 09:20:27 04f6155c873d Extracting [================================================> ] 103.6MB/107.3MB 09:20:27 eabd8714fec9 Extracting [===========================> ] 206.1MB/375MB 09:20:28 55f2b468da67 Extracting [========================> ] 127MB/257.9MB 09:20:28 eabd8714fec9 Extracting [============================> ] 212.2MB/375MB 09:20:28 04f6155c873d Extracting [================================================> ] 104.7MB/107.3MB 09:20:28 eabd8714fec9 Extracting [============================> ] 216.7MB/375MB 09:20:28 55f2b468da67 Extracting [=========================> ] 129.8MB/257.9MB 09:20:28 04f6155c873d Extracting [==================================================>] 107.3MB/107.3MB 09:20:28 eabd8714fec9 Extracting [=============================> ] 220.6MB/375MB 09:20:28 55f2b468da67 Extracting [=========================> ] 133.1MB/257.9MB 09:20:28 eabd8714fec9 Extracting [=============================> ] 223.9MB/375MB 09:20:28 55f2b468da67 Extracting [==========================> ] 137.6MB/257.9MB 09:20:28 eabd8714fec9 Extracting [==============================> ] 230.1MB/375MB 09:20:28 55f2b468da67 Extracting [===========================> ] 140.9MB/257.9MB 09:20:28 eabd8714fec9 Extracting [==============================> ] 231.2MB/375MB 09:20:28 55f2b468da67 Extracting [===========================> ] 142MB/257.9MB 09:20:28 prometheus Pulled 09:20:28 eabd8714fec9 Extracting [===============================> ] 233.4MB/375MB 09:20:28 e73cb4a42719 Pull complete 09:20:28 55f2b468da67 Extracting [============================> ] 145.9MB/257.9MB 09:20:28 eabd8714fec9 Extracting [===============================> ] 237.3MB/375MB 09:20:28 55f2b468da67 Extracting [============================> ] 149.3MB/257.9MB 09:20:28 eabd8714fec9 Extracting [================================> ] 241.2MB/375MB 09:20:29 55f2b468da67 Extracting [=============================> ] 152.6MB/257.9MB 09:20:29 eabd8714fec9 Extracting [================================> ] 245.7MB/375MB 09:20:29 55f2b468da67 Extracting [==============================> ] 156.5MB/257.9MB 09:20:29 eabd8714fec9 Extracting [=================================> ] 250.1MB/375MB 09:20:29 55f2b468da67 Extracting [===============================> ] 161MB/257.9MB 09:20:29 04f6155c873d Pull complete 09:20:29 a83b68436f09 Extracting [==================================================>] 9.919kB/9.919kB 09:20:29 a83b68436f09 Extracting [==================================================>] 9.919kB/9.919kB 09:20:29 55f2b468da67 Extracting [===============================> ] 162.7MB/257.9MB 09:20:29 eabd8714fec9 Extracting [=================================> ] 252.9MB/375MB 09:20:29 55f2b468da67 Extracting [================================> ] 167.7MB/257.9MB 09:20:29 eabd8714fec9 Extracting [==================================> ] 258.5MB/375MB 09:20:29 eabd8714fec9 Extracting [===================================> ] 263.5MB/375MB 09:20:29 55f2b468da67 Extracting [=================================> ] 170.5MB/257.9MB 09:20:29 eabd8714fec9 Extracting [===================================> ] 267.4MB/375MB 09:20:29 55f2b468da67 Extracting [=================================> ] 171.6MB/257.9MB 
09:20:29 eabd8714fec9 Extracting [===================================> ] 269.1MB/375MB 09:20:29 55f2b468da67 Extracting [=================================> ] 173.2MB/257.9MB 09:20:30 eabd8714fec9 Extracting [===================================> ] 269.6MB/375MB 09:20:30 a83b68436f09 Pull complete 09:20:30 55f2b468da67 Extracting [=================================> ] 173.8MB/257.9MB 09:20:30 eabd8714fec9 Extracting [====================================> ] 270.2MB/375MB 09:20:30 55f2b468da67 Extracting [=================================> ] 174.4MB/257.9MB 09:20:30 85dde7dceb0a Extracting [> ] 557.1kB/63.48MB 09:20:30 787d6bee9571 Extracting [==================================================>] 127B/127B 09:20:30 787d6bee9571 Extracting [==================================================>] 127B/127B 09:20:30 eabd8714fec9 Extracting [====================================> ] 271.3MB/375MB 09:20:30 55f2b468da67 Extracting [==================================> ] 175.5MB/257.9MB 09:20:30 85dde7dceb0a Extracting [> ] 1.114MB/63.48MB 09:20:30 787d6bee9571 Pull complete 09:20:30 13ff0988aaea Extracting [==================================================>] 167B/167B 09:20:30 13ff0988aaea Extracting [==================================================>] 167B/167B 09:20:30 55f2b468da67 Extracting [==================================> ] 177.1MB/257.9MB 09:20:30 eabd8714fec9 Extracting [====================================> ] 272.4MB/375MB 09:20:30 85dde7dceb0a Extracting [=> ] 1.671MB/63.48MB 09:20:30 55f2b468da67 Extracting [==================================> ] 179.9MB/257.9MB 09:20:30 13ff0988aaea Pull complete 09:20:30 4b82842ab819 Extracting [==================================================>] 5.415kB/5.415kB 09:20:30 4b82842ab819 Extracting [==================================================>] 5.415kB/5.415kB 09:20:30 85dde7dceb0a Extracting [=> ] 2.228MB/63.48MB 09:20:30 eabd8714fec9 Extracting [====================================> ] 273.5MB/375MB 09:20:31 55f2b468da67 Extracting [===================================> ] 182.7MB/257.9MB 09:20:31 eabd8714fec9 Extracting [====================================> ] 275.2MB/375MB 09:20:31 4b82842ab819 Pull complete 09:20:31 7e568a0dc8fb Extracting [==================================================>] 184B/184B 09:20:31 7e568a0dc8fb Extracting [==================================================>] 184B/184B 09:20:31 85dde7dceb0a Extracting [===> ] 3.899MB/63.48MB 09:20:31 55f2b468da67 Extracting [====================================> ] 186.1MB/257.9MB 09:20:31 eabd8714fec9 Extracting [=====================================> ] 278MB/375MB 09:20:31 85dde7dceb0a Extracting [===> ] 4.456MB/63.48MB 09:20:31 55f2b468da67 Extracting [====================================> ] 190.5MB/257.9MB 09:20:31 eabd8714fec9 Extracting [=====================================> ] 282.4MB/375MB 09:20:31 85dde7dceb0a Extracting [===> ] 5.014MB/63.48MB 09:20:31 55f2b468da67 Extracting [=====================================> ] 193.3MB/257.9MB 09:20:31 eabd8714fec9 Extracting [======================================> ] 286.3MB/375MB 09:20:31 eabd8714fec9 Extracting [======================================> ] 290.2MB/375MB 09:20:31 55f2b468da67 Extracting [=====================================> ] 195.5MB/257.9MB 09:20:31 85dde7dceb0a Extracting [======> ] 7.799MB/63.48MB 09:20:31 eabd8714fec9 Extracting [======================================> ] 292.5MB/375MB 09:20:32 eabd8714fec9 Extracting [=======================================> ] 294.1MB/375MB 09:20:32 7e568a0dc8fb Pull complete 09:20:32 
55f2b468da67 Extracting [======================================> ] 196.1MB/257.9MB 09:20:32 eabd8714fec9 Extracting [=======================================> ] 295.2MB/375MB 09:20:32 85dde7dceb0a Extracting [=======> ] 9.47MB/63.48MB 09:20:32 eabd8714fec9 Extracting [=======================================> ] 295.8MB/375MB 09:20:32 55f2b468da67 Extracting [======================================> ] 196.6MB/257.9MB 09:20:32 85dde7dceb0a Extracting [========> ] 10.58MB/63.48MB 09:20:32 eabd8714fec9 Extracting [=======================================> ] 296.4MB/375MB 09:20:32 85dde7dceb0a Extracting [=========> ] 11.7MB/63.48MB 09:20:32 55f2b468da67 Extracting [======================================> ] 197.8MB/257.9MB 09:20:32 85dde7dceb0a Extracting [==========> ] 13.37MB/63.48MB 09:20:32 55f2b468da67 Extracting [======================================> ] 200MB/257.9MB 09:20:32 eabd8714fec9 Extracting [=======================================> ] 299.1MB/375MB 09:20:32 postgres Pulled 09:20:32 85dde7dceb0a Extracting [============> ] 15.6MB/63.48MB 09:20:32 55f2b468da67 Extracting [=======================================> ] 201.7MB/257.9MB 09:20:32 eabd8714fec9 Extracting [========================================> ] 301.4MB/375MB 09:20:32 55f2b468da67 Extracting [=======================================> ] 203.3MB/257.9MB 09:20:32 eabd8714fec9 Extracting [========================================> ] 303MB/375MB 09:20:32 85dde7dceb0a Extracting [=============> ] 16.71MB/63.48MB 09:20:32 55f2b468da67 Extracting [=======================================> ] 204.4MB/257.9MB 09:20:32 eabd8714fec9 Extracting [========================================> ] 304.2MB/375MB 09:20:32 85dde7dceb0a Extracting [=============> ] 17.27MB/63.48MB 09:20:33 55f2b468da67 Extracting [=======================================> ] 206.1MB/257.9MB 09:20:33 eabd8714fec9 Extracting [========================================> ] 305.8MB/375MB 09:20:33 85dde7dceb0a Extracting [===============> ] 19.5MB/63.48MB 09:20:33 85dde7dceb0a Extracting [=================> ] 22.28MB/63.48MB 09:20:33 55f2b468da67 Extracting [========================================> ] 207.2MB/257.9MB 09:20:33 eabd8714fec9 Extracting [========================================> ] 306.9MB/375MB 09:20:33 85dde7dceb0a Extracting [===================> ] 24.51MB/63.48MB 09:20:33 55f2b468da67 Extracting [========================================> ] 207.8MB/257.9MB 09:20:33 eabd8714fec9 Extracting [=========================================> ] 309.2MB/375MB 09:20:33 85dde7dceb0a Extracting [=====================> ] 27.3MB/63.48MB 09:20:33 eabd8714fec9 Extracting [=========================================> ] 309.7MB/375MB 09:20:33 85dde7dceb0a Extracting [======================> ] 28.97MB/63.48MB 09:20:33 55f2b468da67 Extracting [========================================> ] 210.6MB/257.9MB 09:20:33 eabd8714fec9 Extracting [=========================================> ] 311.4MB/375MB 09:20:33 85dde7dceb0a Extracting [========================> ] 31.2MB/63.48MB 09:20:33 55f2b468da67 Extracting [=========================================> ] 212.2MB/257.9MB 09:20:33 85dde7dceb0a Extracting [==========================> ] 33.42MB/63.48MB 09:20:33 eabd8714fec9 Extracting [=========================================> ] 313.6MB/375MB 09:20:34 55f2b468da67 Extracting [=========================================> ] 214.5MB/257.9MB 09:20:34 eabd8714fec9 Extracting [=========================================> ] 314.2MB/375MB 09:20:34 85dde7dceb0a Extracting [===========================> ] 
34.54MB/63.48MB 09:20:34 55f2b468da67 Extracting [==========================================> ] 217.8MB/257.9MB 09:20:34 eabd8714fec9 Extracting [==========================================> ] 317MB/375MB 09:20:34 85dde7dceb0a Extracting [============================> ] 36.21MB/63.48MB 09:20:34 55f2b468da67 Extracting [==========================================> ] 221.2MB/257.9MB 09:20:34 85dde7dceb0a Extracting [=============================> ] 37.88MB/63.48MB 09:20:34 eabd8714fec9 Extracting [==========================================> ] 319.8MB/375MB 09:20:34 85dde7dceb0a Extracting [===============================> ] 40.11MB/63.48MB 09:20:34 55f2b468da67 Extracting [===========================================> ] 222.8MB/257.9MB 09:20:34 eabd8714fec9 Extracting [==========================================> ] 322MB/375MB 09:20:34 85dde7dceb0a Extracting [=================================> ] 42.89MB/63.48MB 09:20:34 55f2b468da67 Extracting [===========================================> ] 225.6MB/257.9MB 09:20:34 eabd8714fec9 Extracting [===========================================> ] 324.8MB/375MB 09:20:34 85dde7dceb0a Extracting [==================================> ] 44.01MB/63.48MB 09:20:34 55f2b468da67 Extracting [===========================================> ] 226.7MB/257.9MB 09:20:34 eabd8714fec9 Extracting [===========================================> ] 327MB/375MB 09:20:34 85dde7dceb0a Extracting [=====================================> ] 47.35MB/63.48MB 09:20:34 55f2b468da67 Extracting [============================================> ] 228.4MB/257.9MB 09:20:34 eabd8714fec9 Extracting [===========================================> ] 328.7MB/375MB 09:20:34 85dde7dceb0a Extracting [=======================================> ] 50.14MB/63.48MB 09:20:34 eabd8714fec9 Extracting [===========================================> ] 329.8MB/375MB 09:20:34 55f2b468da67 Extracting [============================================> ] 229.5MB/257.9MB 09:20:34 85dde7dceb0a Extracting [=========================================> ] 52.36MB/63.48MB 09:20:34 55f2b468da67 Extracting [============================================> ] 231.2MB/257.9MB 09:20:35 85dde7dceb0a Extracting [=========================================> ] 52.92MB/63.48MB 09:20:35 eabd8714fec9 Extracting [============================================> ] 331.4MB/375MB 09:20:35 55f2b468da67 Extracting [============================================> ] 231.7MB/257.9MB 09:20:35 85dde7dceb0a Extracting [============================================> ] 56.26MB/63.48MB 09:20:35 55f2b468da67 Extracting [=============================================> ] 232.3MB/257.9MB 09:20:35 eabd8714fec9 Extracting [============================================> ] 332MB/375MB 09:20:35 eabd8714fec9 Extracting [============================================> ] 332.6MB/375MB 09:20:35 55f2b468da67 Extracting [=============================================> ] 232.8MB/257.9MB 09:20:35 85dde7dceb0a Extracting [==============================================> ] 59.05MB/63.48MB 09:20:35 eabd8714fec9 Extracting [============================================> ] 335.9MB/375MB 09:20:35 55f2b468da67 Extracting [=============================================> ] 235.6MB/257.9MB 09:20:35 eabd8714fec9 Extracting [=============================================> ] 338.1MB/375MB 09:20:35 55f2b468da67 Extracting [=============================================> ] 236.2MB/257.9MB 09:20:35 85dde7dceb0a Extracting [==============================================> ] 59.6MB/63.48MB 09:20:35 eabd8714fec9 Extracting 
[=============================================> ] 339.8MB/375MB 09:20:35 55f2b468da67 Extracting [==============================================> ] 239.5MB/257.9MB 09:20:35 85dde7dceb0a Extracting [=================================================> ] 62.95MB/63.48MB 09:20:35 eabd8714fec9 Extracting [=============================================> ] 340.9MB/375MB 09:20:35 55f2b468da67 Extracting [==============================================> ] 241.2MB/257.9MB 09:20:35 85dde7dceb0a Extracting [==================================================>] 63.48MB/63.48MB 09:20:35 85dde7dceb0a Extracting [==================================================>] 63.48MB/63.48MB 09:20:36 eabd8714fec9 Extracting [=============================================> ] 342MB/375MB 09:20:36 55f2b468da67 Extracting [===============================================> ] 244.5MB/257.9MB 09:20:36 55f2b468da67 Extracting [===============================================> ] 245.1MB/257.9MB 09:20:36 55f2b468da67 Extracting [=================================================> ] 253.5MB/257.9MB 09:20:36 eabd8714fec9 Extracting [=============================================> ] 342.6MB/375MB 09:20:36 55f2b468da67 Extracting [=================================================> ] 256.8MB/257.9MB 09:20:36 eabd8714fec9 Extracting [=============================================> ] 343.1MB/375MB 09:20:36 55f2b468da67 Extracting [==================================================>] 257.9MB/257.9MB 09:20:37 eabd8714fec9 Extracting [==============================================> ] 345.4MB/375MB 09:20:37 85dde7dceb0a Pull complete 09:20:37 eabd8714fec9 Extracting [==============================================> ] 346.5MB/375MB 09:20:37 eabd8714fec9 Extracting [==============================================> ] 351.5MB/375MB 09:20:37 7009d5001b77 Extracting [==================================================>] 11.92kB/11.92kB 09:20:37 7009d5001b77 Extracting [==================================================>] 11.92kB/11.92kB 09:20:37 eabd8714fec9 Extracting [===============================================> ] 357.1MB/375MB 09:20:37 eabd8714fec9 Extracting [================================================> ] 362.1MB/375MB 09:20:37 eabd8714fec9 Extracting [=================================================> ] 368.8MB/375MB 09:20:37 eabd8714fec9 Extracting [=================================================> ] 373.8MB/375MB 09:20:38 eabd8714fec9 Extracting [==================================================>] 375MB/375MB 09:20:39 55f2b468da67 Pull complete 09:20:39 7009d5001b77 Pull complete 09:20:39 eabd8714fec9 Pull complete 09:20:39 82bfc142787e Extracting [> ] 98.3kB/8.613MB 09:20:39 45fd2fec8a19 Extracting [==================================================>] 1.103kB/1.103kB 09:20:39 45fd2fec8a19 Extracting [==================================================>] 1.103kB/1.103kB 09:20:39 538deb30e80c Extracting [==================================================>] 1.225kB/1.225kB 09:20:39 538deb30e80c Extracting [==================================================>] 1.225kB/1.225kB 09:20:39 82bfc142787e Extracting [=========================> ] 4.325MB/8.613MB 09:20:39 45fd2fec8a19 Pull complete 09:20:39 538deb30e80c Pull complete 09:20:39 8f10199ed94b Extracting [> ] 98.3kB/8.768MB 09:20:39 82bfc142787e Extracting [==================================================>] 8.613MB/8.613MB 09:20:39 grafana Pulled 09:20:39 82bfc142787e Pull complete 09:20:39 46baca71a4ef Extracting [==================================================>] 18.11kB/18.11kB 
09:20:39 46baca71a4ef Extracting [==================================================>] 18.11kB/18.11kB 09:20:39 8f10199ed94b Extracting [====================> ] 3.539MB/8.768MB 09:20:39 8f10199ed94b Extracting [==================================================>] 8.768MB/8.768MB 09:20:39 46baca71a4ef Pull complete 09:20:39 8f10199ed94b Pull complete 09:20:39 f963a77d2726 Extracting [==================================================>] 21.44kB/21.44kB 09:20:39 f963a77d2726 Extracting [==================================================>] 21.44kB/21.44kB 09:20:39 b0e0ef7895f4 Extracting [> ] 393.2kB/37.01MB 09:20:39 f963a77d2726 Pull complete 09:20:39 b0e0ef7895f4 Extracting [====================> ] 14.94MB/37.01MB 09:20:39 f3a82e9f1761 Extracting [> ] 458.8kB/44.41MB 09:20:39 b0e0ef7895f4 Extracting [============================================> ] 32.64MB/37.01MB 09:20:39 b0e0ef7895f4 Extracting [==================================================>] 37.01MB/37.01MB 09:20:39 b0e0ef7895f4 Pull complete 09:20:39 c0c90eeb8aca Extracting [==================================================>] 1.105kB/1.105kB 09:20:39 c0c90eeb8aca Extracting [==================================================>] 1.105kB/1.105kB 09:20:39 f3a82e9f1761 Extracting [============> ] 11.01MB/44.41MB 09:20:39 c0c90eeb8aca Pull complete 09:20:39 5cfb27c10ea5 Extracting [==================================================>] 852B/852B 09:20:39 5cfb27c10ea5 Extracting [==================================================>] 852B/852B 09:20:39 f3a82e9f1761 Extracting [=====================> ] 18.81MB/44.41MB 09:20:39 5cfb27c10ea5 Pull complete 09:20:39 40a5eed61bb0 Extracting [==================================================>] 98B/98B 09:20:39 40a5eed61bb0 Extracting [==================================================>] 98B/98B 09:20:39 f3a82e9f1761 Extracting [====================================> ] 32.57MB/44.41MB 09:20:40 f3a82e9f1761 Extracting [==================================================>] 44.41MB/44.41MB 09:20:40 40a5eed61bb0 Pull complete 09:20:40 e040ea11fa10 Extracting [==================================================>] 173B/173B 09:20:40 e040ea11fa10 Extracting [==================================================>] 173B/173B 09:20:40 f3a82e9f1761 Pull complete 09:20:40 79161a3f5362 Extracting [==================================================>] 4.656kB/4.656kB 09:20:40 79161a3f5362 Extracting [==================================================>] 4.656kB/4.656kB 09:20:40 e040ea11fa10 Pull complete 09:20:40 79161a3f5362 Pull complete 09:20:40 9c266ba63f51 Extracting [==================================================>] 1.105kB/1.105kB 09:20:40 9c266ba63f51 Extracting [==================================================>] 1.105kB/1.105kB 09:20:40 09d5a3f70313 Extracting [> ] 557.1kB/109.2MB 09:20:40 9c266ba63f51 Pull complete 09:20:40 2e8a7df9c2ee Extracting [==================================================>] 851B/851B 09:20:40 2e8a7df9c2ee Extracting [==================================================>] 851B/851B 09:20:40 09d5a3f70313 Extracting [======> ] 14.48MB/109.2MB 09:20:40 2e8a7df9c2ee Pull complete 09:20:40 10f05dd8b1db Extracting [==================================================>] 98B/98B 09:20:40 10f05dd8b1db Extracting [==================================================>] 98B/98B 09:20:40 09d5a3f70313 Extracting [=============> ] 28.97MB/109.2MB 09:20:40 10f05dd8b1db Pull complete 09:20:40 41dac8b43ba6 Extracting [==================================================>] 171B/171B 09:20:40 41dac8b43ba6 
Extracting [==================================================>] 171B/171B 09:20:40 09d5a3f70313 Extracting [====================> ] 45.68MB/109.2MB 09:20:40 09d5a3f70313 Extracting [===========================> ] 60.72MB/109.2MB 09:20:40 41dac8b43ba6 Pull complete 09:20:40 71a9f6a9ab4d Extracting [=======> ] 32.77kB/230.6kB 09:20:40 09d5a3f70313 Extracting [==================================> ] 75.76MB/109.2MB 09:20:40 71a9f6a9ab4d Extracting [==================================================>] 230.6kB/230.6kB 09:20:40 71a9f6a9ab4d Pull complete 09:20:40 09d5a3f70313 Extracting [========================================> ] 89.13MB/109.2MB 09:20:40 da3ed5db7103 Extracting [> ] 557.1kB/127.4MB 09:20:40 09d5a3f70313 Extracting [===============================================> ] 104.7MB/109.2MB 09:20:41 da3ed5db7103 Extracting [=====> ] 14.48MB/127.4MB 09:20:41 09d5a3f70313 Extracting [==================================================>] 109.2MB/109.2MB 09:20:41 09d5a3f70313 Extracting [==================================================>] 109.2MB/109.2MB 09:20:41 da3ed5db7103 Extracting [===========> ] 28.41MB/127.4MB 09:20:41 09d5a3f70313 Pull complete 09:20:41 356f5c2c843b Extracting [==================================================>] 3.623kB/3.623kB 09:20:41 356f5c2c843b Extracting [==================================================>] 3.623kB/3.623kB 09:20:41 da3ed5db7103 Extracting [=================> ] 44.56MB/127.4MB 09:20:41 356f5c2c843b Pull complete 09:20:41 kafka Pulled 09:20:41 da3ed5db7103 Extracting [========================> ] 61.28MB/127.4MB 09:20:41 da3ed5db7103 Extracting [===============================> ] 80.77MB/127.4MB 09:20:41 da3ed5db7103 Extracting [======================================> ] 99.16MB/127.4MB 09:20:41 da3ed5db7103 Extracting [=============================================> ] 114.8MB/127.4MB 09:20:41 da3ed5db7103 Extracting [===============================================> ] 121.4MB/127.4MB 09:20:41 da3ed5db7103 Extracting [=================================================> ] 126.5MB/127.4MB 09:20:41 da3ed5db7103 Extracting [==================================================>] 127.4MB/127.4MB 09:20:46 da3ed5db7103 Pull complete 09:20:46 c955f6e31a04 Extracting [==================================================>] 3.446kB/3.446kB 09:20:46 c955f6e31a04 Extracting [==================================================>] 3.446kB/3.446kB 09:20:46 c955f6e31a04 Pull complete 09:20:46 zookeeper Pulled 09:20:46 Network compose_default Creating 09:20:46 Network compose_default Created 09:20:46 Container zookeeper Creating 09:20:46 Container prometheus Creating 09:20:46 Container postgres Creating 09:20:51 Container zookeeper Created 09:20:51 Container postgres Created 09:20:51 Container kafka Creating 09:20:51 Container policy-db-migrator Creating 09:20:51 Container prometheus Created 09:20:51 Container grafana Creating 09:20:51 Container grafana Created 09:20:51 Container policy-db-migrator Created 09:20:51 Container policy-api Creating 09:20:51 Container kafka Created 09:20:51 Container policy-api Created 09:20:51 Container policy-pap Creating 09:20:51 Container policy-pap Created 09:20:51 Container policy-opa-pdp Creating 09:20:51 Container policy-opa-pdp Created 09:20:51 Container prometheus Starting 09:20:51 Container zookeeper Starting 09:20:51 Container postgres Starting 09:20:51 Container zookeeper Started 09:20:51 Container kafka Starting 09:20:53 Container kafka Started 09:20:53 Container prometheus Started 09:20:53 Container grafana Starting 09:20:53 
09:20:53 Container grafana Started
09:20:53 Container postgres Started
09:20:53 Container policy-db-migrator Starting
09:20:54 Container policy-db-migrator Started
09:20:54 Container policy-api Starting
09:20:54 Container policy-api Started
09:20:54 Container policy-pap Starting
09:20:55 Container policy-pap Started
09:20:55 Container policy-opa-pdp Starting
09:20:56 Container policy-opa-pdp Started
09:20:56 Prometheus server: http://localhost:30259
09:20:56 Grafana server: http://localhost:30269
09:20:56 Waiting 3 minutes for OPA-PDP to start...
09:23:56 Checking if REST port 30003 is open on localhost ...
09:23:56 IMAGE                                                       NAMES            STATUS
09:23:56 nexus3.onap.org:10001/onap/policy-opa-pdp:1.0.8-SNAPSHOT   policy-opa-pdp   Up 3 minutes
09:23:56 nexus3.onap.org:10001/onap/policy-pap:4.2.1-SNAPSHOT       policy-pap       Up 3 minutes
09:23:56 nexus3.onap.org:10001/onap/policy-api:4.2.1-SNAPSHOT       policy-api       Up 3 minutes
09:23:56 nexus3.onap.org:10001/grafana/grafana:latest               grafana          Up 3 minutes
09:23:56 nexus3.onap.org:10001/confluentinc/cp-kafka:7.4.9          kafka            Up 3 minutes
09:23:56 nexus3.onap.org:10001/confluentinc/cp-zookeeper:latest     zookeeper        Up 3 minutes
09:23:56 nexus3.onap.org:10001/library/postgres:16.4                postgres         Up 3 minutes
09:23:56 nexus3.onap.org:10001/prom/prometheus:latest               prometheus       Up 3 minutes
09:23:56 Checking if REST port 30012 is open on localhost ...
09:23:56 IMAGE                                                       NAMES            STATUS
09:23:56 nexus3.onap.org:10001/onap/policy-opa-pdp:1.0.8-SNAPSHOT   policy-opa-pdp   Up 3 minutes
09:23:56 nexus3.onap.org:10001/onap/policy-pap:4.2.1-SNAPSHOT       policy-pap       Up 3 minutes
09:23:56 nexus3.onap.org:10001/onap/policy-api:4.2.1-SNAPSHOT       policy-api       Up 3 minutes
09:23:56 nexus3.onap.org:10001/grafana/grafana:latest               grafana          Up 3 minutes
09:23:56 nexus3.onap.org:10001/confluentinc/cp-kafka:7.4.9          kafka            Up 3 minutes
09:23:56 nexus3.onap.org:10001/confluentinc/cp-zookeeper:latest     zookeeper        Up 3 minutes
09:23:56 nexus3.onap.org:10001/library/postgres:16.4                postgres         Up 3 minutes
09:23:56 nexus3.onap.org:10001/prom/prometheus:latest               prometheus       Up 3 minutes
09:23:56 Cloning into '/w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp/csit/resources/tests/models'...
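For reference, the REST port checks above can be reproduced by hand with a small readiness probe along the following lines; this is an illustrative sketch assuming bash and netcat are available on the host, not the actual CSIT helper script:

    # Poll the OPA-PDP REST ports exposed by the compose stack until they accept connections.
    for port in 30003 30012; do
      until nc -z localhost "${port}"; do
        echo "Waiting for REST port ${port} on localhost ..."
        sleep 5
      done
      echo "REST port ${port} is open on localhost"
    done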
09:23:57 Building robot framework docker image 09:24:51 sha256:459837eec75c6922d8d4447b11f7b963ed3c57c77793998655ca826ee28c8473 09:24:55 top - 09:24:55 up 6 min, 0 users, load average: 1.41, 1.57, 0.81 09:24:55 Tasks: 220 total, 1 running, 149 sleeping, 0 stopped, 0 zombie 09:24:55 %Cpu(s): 10.7 us, 2.9 sy, 0.0 ni, 81.7 id, 4.5 wa, 0.0 hi, 0.1 si, 0.1 st 09:24:55 09:24:55 total used free shared buff/cache available 09:24:55 Mem: 31G 2.3G 21G 28M 7.3G 28G 09:24:55 Swap: 1.0G 0B 1.0G 09:24:55 09:24:55 IMAGE NAMES STATUS 09:24:55 nexus3.onap.org:10001/onap/policy-opa-pdp:1.0.8-SNAPSHOT policy-opa-pdp Up 3 minutes 09:24:55 nexus3.onap.org:10001/onap/policy-pap:4.2.1-SNAPSHOT policy-pap Up 3 minutes 09:24:55 nexus3.onap.org:10001/onap/policy-api:4.2.1-SNAPSHOT policy-api Up 4 minutes 09:24:55 nexus3.onap.org:10001/grafana/grafana:latest grafana Up 4 minutes 09:24:55 nexus3.onap.org:10001/confluentinc/cp-kafka:7.4.9 kafka Up 4 minutes 09:24:55 nexus3.onap.org:10001/confluentinc/cp-zookeeper:latest zookeeper Up 4 minutes 09:24:55 nexus3.onap.org:10001/library/postgres:16.4 postgres Up 4 minutes 09:24:55 nexus3.onap.org:10001/prom/prometheus:latest prometheus Up 4 minutes 09:24:55 09:24:58 CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS 09:24:58 e5ef7ade61ed policy-opa-pdp 0.17% 12.19MiB / 31.41GiB 0.04% 79.2kB / 74.7kB 0B / 0B 20 09:24:58 9375296a2ba2 policy-pap 1.46% 509.2MiB / 31.41GiB 1.58% 2.21MB / 1.22MB 0B / 139MB 68 09:24:58 b80eb2275872 policy-api 0.10% 414.4MiB / 31.41GiB 1.29% 1.15MB / 1.05MB 0B / 0B 60 09:24:58 de5cacec05f6 grafana 0.22% 103.5MiB / 31.41GiB 0.32% 19MB / 230kB 0B / 30.6MB 19 09:24:58 d5f45e4257ab kafka 1.21% 392.9MiB / 31.41GiB 1.22% 299kB / 285kB 0B / 725kB 83 09:24:58 9d9997688116 zookeeper 0.08% 89.54MiB / 31.41GiB 0.28% 58.1kB / 49.3kB 4.1kB / 397kB 62 09:24:58 c6b82f7d1680 postgres 0.01% 86.21MiB / 31.41GiB 0.27% 2.55MB / 3.73MB 127kB / 158MB 26 09:24:58 ff4d803dbda1 prometheus 0.31% 20.96MiB / 31.41GiB 0.07% 203kB / 9.51kB 0B / 0B 11 09:24:58 09:24:58 Container policy-csit Creating 09:24:58 Container policy-csit Created 09:24:58 Attaching to policy-csit 09:24:59 policy-csit | Invoking the robot tests from: opa-pdp-test.robot opa-pdp-slas.robot 09:24:59 policy-csit | Run Robot test 09:24:59 policy-csit | ROBOT_VARIABLES=-v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies 09:24:59 policy-csit | -v NODETEMPLATES:/opt/robotworkspace/models/models-examples/src/main/resources/nodetemplates 09:24:59 policy-csit | -v POLICY_API_IP:policy-api:6969 09:24:59 policy-csit | -v POLICY_RUNTIME_ACM_IP:policy-clamp-runtime-acm:6969 09:24:59 policy-csit | -v POLICY_PARTICIPANT_SIM_IP:policy-clamp-ac-sim-ppnt:6969 09:24:59 policy-csit | -v POLICY_PAP_IP:policy-pap:6969 09:24:59 policy-csit | -v APEX_IP:policy-apex-pdp:6969 09:24:59 policy-csit | -v APEX_EVENTS_IP:policy-apex-pdp:23324 09:24:59 policy-csit | -v KAFKA_IP:kafka:9092 09:24:59 policy-csit | -v PROMETHEUS_IP:prometheus:9090 09:24:59 policy-csit | -v POLICY_PDPX_IP:policy-xacml-pdp:6969 09:24:59 policy-csit | -v POLICY_OPA_IP:policy-opa-pdp:8282 09:24:59 policy-csit | -v POLICY_DROOLS_IP:policy-drools-pdp:9696 09:24:59 policy-csit | -v DROOLS_IP:policy-drools-apps:6969 09:24:59 policy-csit | -v DROOLS_IP_2:policy-drools-apps:9696 09:24:59 policy-csit | -v TEMP_FOLDER:/tmp/distribution 09:24:59 policy-csit | -v DISTRIBUTION_IP:policy-distribution:6969 09:24:59 policy-csit | -v TEST_ENV:docker 09:24:59 policy-csit | -v JAEGER_IP:jaeger:16686 09:24:59 policy-csit | Starting 
Robot test suites ... 09:24:59 policy-csit | ============================================================================== 09:24:59 policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas 09:24:59 policy-csit | ============================================================================== 09:24:59 policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas.Opa-Pdp-Test 09:24:59 policy-csit | ============================================================================== 09:24:59 policy-csit | Healthcheck :: Verify OPA PDP health check | PASS | 09:24:59 policy-csit | ------------------------------------------------------------------------------ 09:24:59 policy-csit | ValidateDataBeforePolicyDeployment | PASS | 09:24:59 policy-csit | ------------------------------------------------------------------------------ 09:25:26 policy-csit | ValidatesZonePolicy | PASS | 09:25:26 policy-csit | ------------------------------------------------------------------------------ 09:25:51 policy-csit | ValidatesVehiclePolicy | PASS | 09:25:51 policy-csit | ------------------------------------------------------------------------------ 09:26:17 policy-csit | ValidatesAbacPolicy | PASS | 09:26:17 policy-csit | ------------------------------------------------------------------------------ 09:26:17 policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas.Opa-Pdp-Test | PASS | 09:26:17 policy-csit | 5 tests, 5 passed, 0 failed 09:26:17 policy-csit | ============================================================================== 09:26:17 policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas.Opa-Pdp-Slas 09:26:17 policy-csit | ============================================================================== 09:27:17 policy-csit | WaitForPrometheusServer :: Sleep time to wait for Prometheus serve... | PASS | 09:27:17 policy-csit | ------------------------------------------------------------------------------ 09:27:17 policy-csit | ValidateOPAPolicyDecisionsTotalCounter :: Validate opa policy deci... | PASS | 09:27:17 policy-csit | ------------------------------------------------------------------------------ 09:27:17 policy-csit | ValidateOPAPolicyDataTotalCounter :: Validate opa policy data coun... | PASS | 09:27:17 policy-csit | ------------------------------------------------------------------------------ 09:27:17 policy-csit | ValidateOPADecisionAverageResponseTime :: Ensure average response ... | PASS | 09:27:17 policy-csit | ------------------------------------------------------------------------------ 09:27:17 policy-csit | ValidateOPADataAverageResponseTime :: Ensure average response time... 
09:26:17 policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas.Opa-Pdp-Slas
09:26:17 policy-csit | ==============================================================================
09:27:17 policy-csit | WaitForPrometheusServer :: Sleep time to wait for Prometheus serve... | PASS |
09:27:17 policy-csit | ------------------------------------------------------------------------------
09:27:17 policy-csit | ValidateOPAPolicyDecisionsTotalCounter :: Validate opa policy deci... | PASS |
09:27:17 policy-csit | ------------------------------------------------------------------------------
09:27:17 policy-csit | ValidateOPAPolicyDataTotalCounter :: Validate opa policy data coun... | PASS |
09:27:17 policy-csit | ------------------------------------------------------------------------------
09:27:17 policy-csit | ValidateOPADecisionAverageResponseTime :: Ensure average response ... | PASS |
09:27:17 policy-csit | ------------------------------------------------------------------------------
09:27:17 policy-csit | ValidateOPADataAverageResponseTime :: Ensure average response time... | PASS |
09:27:17 policy-csit | ------------------------------------------------------------------------------
09:27:17 policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas.Opa-Pdp-Slas | PASS |
09:27:17 policy-csit | 5 tests, 5 passed, 0 failed
09:27:17 policy-csit | ==============================================================================
09:27:17 policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas | PASS |
09:27:17 policy-csit | 10 tests, 10 passed, 0 failed
09:27:17 policy-csit | ==============================================================================
09:27:17 policy-csit | Output: /tmp/results/output.xml
09:27:17 policy-csit | Log: /tmp/results/log.html
09:27:17 policy-csit | Report: /tmp/results/report.html
09:27:17 policy-csit | RESULT: 0
09:27:18 policy-csit exited with code 0
09:27:18 IMAGE NAMES STATUS
09:27:18 nexus3.onap.org:10001/onap/policy-opa-pdp:1.0.8-SNAPSHOT policy-opa-pdp Up 6 minutes
09:27:18 nexus3.onap.org:10001/onap/policy-pap:4.2.1-SNAPSHOT policy-pap Up 6 minutes
09:27:18 nexus3.onap.org:10001/onap/policy-api:4.2.1-SNAPSHOT policy-api Up 6 minutes
09:27:18 nexus3.onap.org:10001/grafana/grafana:latest grafana Up 6 minutes
09:27:18 nexus3.onap.org:10001/confluentinc/cp-kafka:7.4.9 kafka Up 6 minutes
09:27:18 nexus3.onap.org:10001/confluentinc/cp-zookeeper:latest zookeeper Up 6 minutes
09:27:18 nexus3.onap.org:10001/library/postgres:16.4 postgres Up 6 minutes
09:27:18 nexus3.onap.org:10001/prom/prometheus:latest prometheus Up 6 minutes
09:27:18 Shut down started!
09:27:19 Collecting logs from docker compose containers...
09:27:20 grafana | logger=settings t=2025-06-19T09:20:53.497942784Z level=info msg="Starting Grafana" version=12.0.2 commit=5bda17e7c1cb313eb96266f2fdda73a6b35c3977 branch=HEAD compiled=2025-06-19T09:20:53Z
09:27:20 grafana | logger=settings t=2025-06-19T09:20:53.498375908Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
09:27:20 grafana | logger=settings t=2025-06-19T09:20:53.498393138Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
09:27:20 grafana | logger=settings t=2025-06-19T09:20:53.498398978Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
09:27:20 grafana | logger=settings t=2025-06-19T09:20:53.498403808Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
09:27:20 grafana | logger=settings t=2025-06-19T09:20:53.498409788Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
09:27:20 grafana | logger=settings t=2025-06-19T09:20:53.498414508Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
09:27:20 grafana | logger=settings t=2025-06-19T09:20:53.498420118Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
09:27:20 grafana | logger=settings t=2025-06-19T09:20:53.498425878Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
09:27:20 grafana | logger=settings t=2025-06-19T09:20:53.498431018Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
09:27:20 grafana | logger=settings t=2025-06-19T09:20:53.498436388Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
09:27:20 grafana | logger=settings t=2025-06-19T09:20:53.498446349Z level=info msg="Config overridden from Environment variable"
var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning" 09:27:20 grafana | logger=settings t=2025-06-19T09:20:53.498452839Z level=info msg=Target target=[all] 09:27:20 grafana | logger=settings t=2025-06-19T09:20:53.498465419Z level=info msg="Path Home" path=/usr/share/grafana 09:27:20 grafana | logger=settings t=2025-06-19T09:20:53.498470519Z level=info msg="Path Data" path=/var/lib/grafana 09:27:20 grafana | logger=settings t=2025-06-19T09:20:53.498477349Z level=info msg="Path Logs" path=/var/log/grafana 09:27:20 grafana | logger=settings t=2025-06-19T09:20:53.498481799Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins 09:27:20 grafana | logger=settings t=2025-06-19T09:20:53.498486309Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning 09:27:20 grafana | logger=settings t=2025-06-19T09:20:53.498491249Z level=info msg="App mode production" 09:27:20 grafana | logger=featuremgmt t=2025-06-19T09:20:53.499019474Z level=info msg=FeatureToggles alertingApiServer=true lokiStructuredMetadata=true lokiQueryHints=true alertingRuleRecoverDeleted=true logsPanelControls=true prometheusUsesCombobox=true lokiLabelNamesQueryApi=true unifiedRequestLog=true correlations=true formatString=true dataplaneFrontendFallback=true alertingQueryAndExpressionsStepMode=true recordedQueriesMulti=true logsExploreTableVisualisation=true recoveryThreshold=true alertingInsights=true tlsMemcached=true dashboardScene=true dashboardSceneSolo=true externalCorePlugins=true newDashboardSharingComponent=true influxdbBackendMigration=true alertingSimplifiedRouting=true kubernetesClientDashboardsFolders=true useSessionStorageForRedirection=true pinNavItems=true lokiQuerySplitting=true awsAsyncQueryCaching=true unifiedStorageSearchPermissionFiltering=true publicDashboardsScene=true logRowsPopoverMenu=true onPremToCloudMigrations=true kubernetesPlaylists=true promQLScope=true addFieldFromCalculationStatFunctions=true cloudWatchCrossAccountQuerying=true azureMonitorPrometheusExemplars=true groupToNestedTableTransformation=true alertingRuleVersionHistoryRestore=true logsContextDatasourceUi=true alertRuleRestore=true newPDFRendering=true prometheusAzureOverrideAudience=true dashgpt=true logsInfiniteScrolling=true annotationPermissionUpdate=true dashboardSceneForViewers=true grafanaconThemes=true ssoSettingsSAML=true reportingUseRawTimeRange=true transformationsRedesign=true alertingNotificationsStepMode=true cloudWatchRoundUpEndTime=true ssoSettingsApi=true failWrongDSUID=true alertingRulePermanentlyDelete=true preinstallAutoUpdate=true nestedFolders=true panelMonitoring=true azureMonitorEnableUserAuth=true cloudWatchNewLabelParsing=true pluginsDetailsRightPanel=true alertingUIOptimizeReducer=true newFiltersUI=true angularDeprecationUI=true 09:27:20 grafana | logger=sqlstore t=2025-06-19T09:20:53.499105305Z level=info msg="Connecting to DB" dbtype=sqlite3 09:27:20 grafana | logger=sqlstore t=2025-06-19T09:20:53.499128465Z level=info msg="Creating SQLite database file" path=/var/lib/grafana/grafana.db 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.501155075Z level=info msg="Locking database" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.501172066Z level=info msg="Starting DB migrations" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.501987544Z level=info msg="Executing migration" id="create migration_log table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.503106445Z level=info msg="Migration successfully executed" id="create migration_log table" duration=1.118501ms 
09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.508925862Z level=info msg="Executing migration" id="create user table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.509615549Z level=info msg="Migration successfully executed" id="create user table" duration=689.297µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.516371155Z level=info msg="Executing migration" id="add unique index user.login" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.517274764Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=906.879µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.521319654Z level=info msg="Executing migration" id="add unique index user.email" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.522141463Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=821.789µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.529584406Z level=info msg="Executing migration" id="drop index UQE_user_login - v1" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.531374713Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=1.788067ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.535690676Z level=info msg="Executing migration" id="drop index UQE_user_email - v1" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.536655286Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=960.16µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.540952408Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.543438643Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=2.485755ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.549857146Z level=info msg="Executing migration" id="create user table v2" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.550699075Z level=info msg="Migration successfully executed" id="create user table v2" duration=841.869µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.558751514Z level=info msg="Executing migration" id="create index UQE_user_login - v2" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.559511511Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=759.917µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.566609532Z level=info msg="Executing migration" id="create index UQE_user_email - v2" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.567687132Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=1.07767ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.573058875Z level=info msg="Executing migration" id="copy data_source v1 to v2" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.573427289Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=368.274µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.577791762Z level=info msg="Executing migration" id="Drop old table user_v1" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.578304987Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=512.955µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.581507649Z level=info msg="Executing migration" id="Add column help_flags1 to user table" 09:27:20 grafana | 
logger=migrator t=2025-06-19T09:20:53.5826272Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=1.119121ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.585599739Z level=info msg="Executing migration" id="Update user table charset" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.585627609Z level=info msg="Migration successfully executed" id="Update user table charset" duration=28.41µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.590973292Z level=info msg="Executing migration" id="Add last_seen_at column to user" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.592111213Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=1.139551ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.595183924Z level=info msg="Executing migration" id="Add missing user data" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.595457897Z level=info msg="Migration successfully executed" id="Add missing user data" duration=273.813µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.599908951Z level=info msg="Executing migration" id="Add is_disabled column to user" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.601015852Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=1.1064ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.603998601Z level=info msg="Executing migration" id="Add index user.login/user.email" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.604718458Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=716.657µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.611422874Z level=info msg="Executing migration" id="Add is_service_account column to user" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.612568245Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=1.142261ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.61708754Z level=info msg="Executing migration" id="Update is_service_account column to nullable" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.626271761Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=9.183911ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.630707944Z level=info msg="Executing migration" id="Add uid column to user" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.631804055Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=1.095631ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.635755364Z level=info msg="Executing migration" id="Update uid column values for users" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.635956136Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=200.642µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.642343999Z level=info msg="Executing migration" id="Add unique index user_uid" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.643243588Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=899.349µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.646590711Z level=info msg="Executing migration" id="Add is_provisioned column to user" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.647964135Z level=info msg="Migration successfully executed" 
id="Add is_provisioned column to user" duration=1.372364ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.652215287Z level=info msg="Executing migration" id="update login field with orgid to allow for multiple service accounts with same name across orgs" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.652711862Z level=info msg="Migration successfully executed" id="update login field with orgid to allow for multiple service accounts with same name across orgs" duration=529.605µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.659531179Z level=info msg="Executing migration" id="update service accounts login field orgid to appear only once" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.66061376Z level=info msg="Migration successfully executed" id="update service accounts login field orgid to appear only once" duration=1.083211ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.665041774Z level=info msg="Executing migration" id="update login and email fields to lowercase" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.665440678Z level=info msg="Migration successfully executed" id="update login and email fields to lowercase" duration=398.864µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.670386206Z level=info msg="Executing migration" id="update login and email fields to lowercase2" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.670855581Z level=info msg="Migration successfully executed" id="update login and email fields to lowercase2" duration=469.405µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.675394176Z level=info msg="Executing migration" id="create temp user table v1-7" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.676233414Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=839.008µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.682278514Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.682982201Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=705.687µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.68693577Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.687618596Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=682.396µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.700983029Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.702466173Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=1.482324ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.708396792Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.708911247Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=512.255µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.713058718Z level=info msg="Executing migration" id="Update temp_user table charset" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.713073728Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=15.29µs 09:27:20 grafana | logger=migrator 
t=2025-06-19T09:20:53.716891786Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.71731856Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=426.534µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.721149998Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.721572752Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=422.704µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.727290158Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.727827174Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=536.916µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.732012995Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.732462589Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=449.554µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.739098005Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.741324527Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=2.226112ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.745259516Z level=info msg="Executing migration" id="create temp_user v2" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.745838932Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=579.416µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.749590719Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.750083984Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=493.005µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.753905961Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.754537188Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=632.677µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.760055952Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.761182013Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=1.122661ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.764436785Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.765534536Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=1.097081ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.7699637Z level=info msg="Executing migration" id="copy temp_user v1 to v2" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.770364094Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=399.734µs 09:27:20 grafana | logger=migrator 
t=2025-06-19T09:20:53.776267382Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.776777627Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=509.795µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.780946978Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.781718296Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=771.158µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.786758895Z level=info msg="Executing migration" id="create star table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.78721169Z level=info msg="Migration successfully executed" id="create star table" duration=452.835µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.793399821Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.793918716Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=521.145µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.797884586Z level=info msg="Executing migration" id="Add column dashboard_uid in star" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.798857745Z level=info msg="Migration successfully executed" id="Add column dashboard_uid in star" duration=972.769µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.803089907Z level=info msg="Executing migration" id="Add column org_id in star" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.804049396Z level=info msg="Migration successfully executed" id="Add column org_id in star" duration=958.889µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.807277208Z level=info msg="Executing migration" id="Add column updated in star" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.8084524Z level=info msg="Migration successfully executed" id="Add column updated in star" duration=1.174532ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.814911184Z level=info msg="Executing migration" id="add index in star table on dashboard_uid, org_id and user_id columns" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.815449289Z level=info msg="Migration successfully executed" id="add index in star table on dashboard_uid, org_id and user_id columns" duration=537.475µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.819327817Z level=info msg="Executing migration" id="create org table v1" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.819839522Z level=info msg="Migration successfully executed" id="create org table v1" duration=511.245µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.823835342Z level=info msg="Executing migration" id="create index UQE_org_name - v1" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.824335687Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=497.605µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.827931282Z level=info msg="Executing migration" id="create org_user table v1" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.828395447Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=463.585µs 09:27:20 grafana | logger=migrator 
t=2025-06-19T09:20:53.834352985Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.835013492Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=659.837µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.837924251Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.838606208Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=682.207µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.841470956Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.842006241Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=532.245µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.844755858Z level=info msg="Executing migration" id="Update org table charset" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.844773329Z level=info msg="Migration successfully executed" id="Update org table charset" duration=17.351µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.850584896Z level=info msg="Executing migration" id="Update org_user table charset" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.850603836Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=19.02µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.853275033Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.853401444Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=126.201µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.857245932Z level=info msg="Executing migration" id="create dashboard table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.857774847Z level=info msg="Migration successfully executed" id="create dashboard table" duration=528.635µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.860688026Z level=info msg="Executing migration" id="add index dashboard.account_id" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.861432013Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=742.977µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.866834406Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.867373662Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=538.966µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.871184879Z level=info msg="Executing migration" id="create dashboard_tag table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.871641764Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=456.755µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.874676404Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.87524607Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=568.966µs 09:27:20 grafana | logger=migrator 
t=2025-06-19T09:20:53.881744034Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.883352909Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=1.606615ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.887932295Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.89557074Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=7.635815ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.900697481Z level=info msg="Executing migration" id="create dashboard v2" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.901379298Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=681.977µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.910060743Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.911527188Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=1.464555ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.916310225Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.918031982Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=1.720987ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.923011711Z level=info msg="Executing migration" id="copy dashboard v1 to v2" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:53.923412475Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=400.354µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.011828509Z level=info msg="Executing migration" id="drop table dashboard_v1" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.012987791Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=1.163002ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.075157348Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.075227349Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=74.201µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.093177047Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.09651004Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=3.334433ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.10453041Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.1075929Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=3.06089ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.11758602Z level=info msg="Executing migration" id="Add column gnetId in dashboard" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.119212866Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=1.628436ms 09:27:20 grafana | logger=migrator 
t=2025-06-19T09:20:54.122991514Z level=info msg="Executing migration" id="Add index for gnetId in dashboard" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.123564539Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=572.775µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.128313977Z level=info msg="Executing migration" id="Add column plugin_id in dashboard" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.130624Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=2.308303ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.133939962Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.135173585Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=1.233493ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.138446427Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.139217765Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=770.978µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.155104853Z level=info msg="Executing migration" id="Update dashboard table charset" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.155154064Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=50.321µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.160449936Z level=info msg="Executing migration" id="Update dashboard_tag table charset" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.160575598Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=133.072µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.164919211Z level=info msg="Executing migration" id="Add column folder_id in dashboard" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.167503557Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=2.585185ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.170450686Z level=info msg="Executing migration" id="Add column isFolder in dashboard" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.172509946Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=2.0588ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.176003321Z level=info msg="Executing migration" id="Add column has_acl in dashboard" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.178108842Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=2.105431ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.182336094Z level=info msg="Executing migration" id="Add column uid in dashboard" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.185158512Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=2.821268ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.188566706Z level=info msg="Executing migration" id="Update uid column values in dashboard" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.188890169Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=322.493µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.191426785Z 
level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.192242703Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=815.538µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.206305953Z level=info msg="Executing migration" id="Remove unique index org_id_slug" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.207340953Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=1.0363ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.210436674Z level=info msg="Executing migration" id="Update dashboard title length" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.210461484Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=20.96µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.213795047Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.219765697Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=5.965669ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.224654265Z level=info msg="Executing migration" id="create dashboard_provisioning" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.225748696Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=1.094461ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.242008598Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.246366941Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=4.360893ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.249537033Z level=info msg="Executing migration" id="create dashboard_provisioning v2" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.250119329Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=582.706µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.254124018Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.255373361Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=1.251613ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.261462671Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.262487031Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=1.02387ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.265834955Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.266077267Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=241.762µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.269165868Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty" 09:27:20 grafana | logger=migrator 
t=2025-06-19T09:20:54.269567922Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=401.844µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.273949795Z level=info msg="Executing migration" id="Add check_sum column" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.277886405Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=3.93694ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.281674292Z level=info msg="Executing migration" id="Add index for dashboard_title" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.282777653Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=1.101451ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.287036416Z level=info msg="Executing migration" id="delete tags for deleted dashboards" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.287297018Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=263.532µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.29448137Z level=info msg="Executing migration" id="delete stars for deleted dashboards" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.294892804Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=419.954µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.298183787Z level=info msg="Executing migration" id="Add index for dashboard_is_folder" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.299181496Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=1.000049ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.302851543Z level=info msg="Executing migration" id="Add isPublic for dashboard" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.305008854Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=2.156471ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.308200146Z level=info msg="Executing migration" id="Add deleted for dashboard" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.310215786Z level=info msg="Migration successfully executed" id="Add deleted for dashboard" duration=2.01517ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.31557604Z level=info msg="Executing migration" id="Add index for deleted" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.316267416Z level=info msg="Migration successfully executed" id="Add index for deleted" duration=691.436µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.319589919Z level=info msg="Executing migration" id="Add column dashboard_uid in dashboard_tag" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.3216194Z level=info msg="Migration successfully executed" id="Add column dashboard_uid in dashboard_tag" duration=2.028551ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.325125675Z level=info msg="Executing migration" id="Add column org_id in dashboard_tag" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.326878022Z level=info msg="Migration successfully executed" id="Add column org_id in dashboard_tag" duration=1.752207ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.331750471Z level=info msg="Executing migration" id="Add missing dashboard_uid and org_id to dashboard_tag" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.332230325Z level=info msg="Migration successfully executed" 
id="Add missing dashboard_uid and org_id to dashboard_tag" duration=480.614µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.337334396Z level=info msg="Executing migration" id="Add apiVersion for dashboard" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.339795191Z level=info msg="Migration successfully executed" id="Add apiVersion for dashboard" duration=2.460625ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.344839221Z level=info msg="Executing migration" id="Add index for dashboard_uid on dashboard_tag table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.345596788Z level=info msg="Migration successfully executed" id="Add index for dashboard_uid on dashboard_tag table" duration=757.367µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.350304575Z level=info msg="Executing migration" id="Add missing dashboard_uid and org_id to star" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.350862201Z level=info msg="Migration successfully executed" id="Add missing dashboard_uid and org_id to star" duration=557.276µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.354443266Z level=info msg="Executing migration" id="create data_source table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.355430196Z level=info msg="Migration successfully executed" id="create data_source table" duration=971.38µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.359584788Z level=info msg="Executing migration" id="add index data_source.account_id" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.360413706Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=827.168µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.365429816Z level=info msg="Executing migration" id="add unique index data_source.account_id_name" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.366438666Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=1.00882ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.371623857Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.372195393Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=570.686µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.376127312Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.376742128Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=613.916µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.382635147Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.393087971Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=10.441824ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.396976519Z level=info msg="Executing migration" id="create data_source table v2" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.397971269Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=994µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.401699196Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2" 09:27:20 grafana | 
logger=migrator t=2025-06-19T09:20:54.402828488Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=1.130862ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.407328012Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.408398973Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=1.069611ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.414409603Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.415451863Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=1.0415ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.419646955Z level=info msg="Executing migration" id="Add column with_credentials" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.422245501Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=2.597866ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.4885449Z level=info msg="Executing migration" id="Add secure json data column" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.491769492Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=3.227512ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.498979114Z level=info msg="Executing migration" id="Update data_source table charset" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.499007035Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=28.561µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.503531229Z level=info msg="Executing migration" id="Update initial version to 1" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.503797962Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=263.863µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.506850843Z level=info msg="Executing migration" id="Add read_only data column" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.509008794Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=2.157531ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.51460494Z level=info msg="Executing migration" id="Migrate logging ds to loki ds" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.514772251Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=167.421µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.518408867Z level=info msg="Executing migration" id="Update json_data with nulls" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.518610189Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=201.332µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.521781121Z level=info msg="Executing migration" id="Add uid column" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.524580789Z level=info msg="Migration successfully executed" id="Add uid column" duration=2.799228ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.530052603Z level=info msg="Executing migration" id="Update uid value" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.530217015Z level=info msg="Migration successfully executed" id="Update uid value" duration=163.642µs 
09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.535124304Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.535991292Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=868.808µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.540586238Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.541723369Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=1.135631ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.54679073Z level=info msg="Executing migration" id="Add is_prunable column" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.55380576Z level=info msg="Migration successfully executed" id="Add is_prunable column" duration=7.00853ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.558700609Z level=info msg="Executing migration" id="Add api_version column" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.561694048Z level=info msg="Migration successfully executed" id="Add api_version column" duration=2.98808ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.58094269Z level=info msg="Executing migration" id="Update secure_json_data column to MediumText" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.58103803Z level=info msg="Migration successfully executed" id="Update secure_json_data column to MediumText" duration=81.71µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.585184172Z level=info msg="Executing migration" id="create api_key table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.586661187Z level=info msg="Migration successfully executed" id="create api_key table" duration=1.476745ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.599642356Z level=info msg="Executing migration" id="add index api_key.account_id" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.60111918Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=1.476444ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.604958399Z level=info msg="Executing migration" id="add index api_key.key" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.606366133Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=1.408303ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.611485003Z level=info msg="Executing migration" id="add index api_key.account_id_name" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.612540354Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=1.058161ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.615623465Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.61619155Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=564.695µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.620020708Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.620556624Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=535.646µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.625339951Z level=info 
msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.626081039Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=740.908µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.630374331Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.639647134Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=9.269713ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.64426056Z level=info msg="Executing migration" id="create api_key table v2" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.645138088Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=881.659µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.65036588Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.651439801Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=1.075941ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.655997716Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.65739966Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=1.402994ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.663028816Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.664332169Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=1.303613ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.668531851Z level=info msg="Executing migration" id="copy api_key v1 to v2" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.669109857Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=579.166µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.672190757Z level=info msg="Executing migration" id="Drop old table api_key_v1" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.672845314Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=654.427µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.678070706Z level=info msg="Executing migration" id="Update api_key table charset" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.678117596Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=49.91µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.682229117Z level=info msg="Executing migration" id="Add expires to api_key table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.686426109Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=4.196762ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.69250717Z level=info msg="Executing migration" id="Add service account foreign key" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.694775402Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=2.267952ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.699166636Z level=info msg="Executing migration" id="set service account foreign key to nil if 0" 
09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.699325847Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=159.321µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.708701301Z level=info msg="Executing migration" id="Add last_used_at to api_key table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.712865302Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=4.163492ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.717573569Z level=info msg="Executing migration" id="Add is_revoked column to api_key table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.72173265Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=4.158431ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.729456857Z level=info msg="Executing migration" id="create dashboard_snapshot table v4" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.730215445Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=756.078µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.735830241Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.736689429Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=858.988µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.73979012Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.740552687Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=762.287µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.743432696Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.744223244Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=789.038µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.749793739Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.751184423Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=1.389954ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.755032702Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.756398255Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=1.366543ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.761690238Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.76190256Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=216.072µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.765771318Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.76598663Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=216.102µs 09:27:20 grafana | logger=migrator 
t=2025-06-19T09:20:54.770552086Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.773839739Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=3.286333ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.78000123Z level=info msg="Executing migration" id="Add encrypted dashboard json column" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.78302068Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=3.01901ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.785816148Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.785907708Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=92.27µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.78908382Z level=info msg="Executing migration" id="create quota table v1" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.79009481Z level=info msg="Migration successfully executed" id="create quota table v1" duration=1.01049ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.795560595Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.796508814Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=948.009µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.799535914Z level=info msg="Executing migration" id="Update quota table charset" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.799690056Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=153.552µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.802441863Z level=info msg="Executing migration" id="create plugin_setting table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.803665905Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=1.220822ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.80914046Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.810038619Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=897.819µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.812966488Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.8161481Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=3.180932ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.819319611Z level=info msg="Executing migration" id="Update plugin_setting table charset" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.819459262Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=136.541µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.822511743Z level=info msg="Executing migration" id="update NULL org_id to 1" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.822961697Z level=info msg="Migration successfully executed" id="update NULL org_id 
to 1" duration=449.504µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.828537593Z level=info msg="Executing migration" id="make org_id NOT NULL and DEFAULT VALUE 1" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.842072057Z level=info msg="Migration successfully executed" id="make org_id NOT NULL and DEFAULT VALUE 1" duration=13.535134ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.84835599Z level=info msg="Executing migration" id="create session table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.849010376Z level=info msg="Migration successfully executed" id="create session table" duration=653.956µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.853040947Z level=info msg="Executing migration" id="Drop old table playlist table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.853250959Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=210.322µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.859289979Z level=info msg="Executing migration" id="Drop old table playlist_item table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.859495981Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=205.662µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.86748057Z level=info msg="Executing migration" id="create playlist table v2" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.869251428Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=1.770718ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.873579601Z level=info msg="Executing migration" id="create playlist item table v2" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.874629401Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=1.05008ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.913827911Z level=info msg="Executing migration" id="Update playlist table charset" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.914102424Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=270.983µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.922552238Z level=info msg="Executing migration" id="Update playlist_item table charset" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.922580368Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=29.77µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.926865001Z level=info msg="Executing migration" id="Add playlist column created_at" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.930368836Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=3.503175ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.933646488Z level=info msg="Executing migration" id="Add playlist column updated_at" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.93682895Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=3.182092ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.943012842Z level=info msg="Executing migration" id="drop preferences table v2" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.943132223Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=119.941µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.948362455Z level=info msg="Executing 
migration" id="drop preferences table v3" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.948465346Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=103.181µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.954135612Z level=info msg="Executing migration" id="create preferences table v3" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.955021001Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=885.169µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.960880899Z level=info msg="Executing migration" id="Update preferences table charset" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.960899389Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=18.79µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.962760488Z level=info msg="Executing migration" id="Add column team_id in preferences" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.965364844Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=2.627056ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.968354384Z level=info msg="Executing migration" id="Update team_id column values in preferences" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.968528405Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=173.631µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.970899409Z level=info msg="Executing migration" id="Add column week_start in preferences" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.973328593Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=2.428334ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.98108207Z level=info msg="Executing migration" id="Add column preferences.json_data" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.984253742Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=3.171192ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.987246722Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.987262832Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=16.97µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.990528844Z level=info msg="Executing migration" id="Add preferences index org_id" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.991365143Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=836.009µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.997853247Z level=info msg="Executing migration" id="Add preferences index user_id" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:54.999338002Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=1.484465ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.002691775Z level=info msg="Executing migration" id="create alert table v1" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.003707235Z level=info msg="Migration successfully executed" id="create alert table v1" duration=1.01507ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.00726417Z level=info msg="Executing migration" id="add index alert org_id & id " 09:27:20 grafana | 
logger=migrator t=2025-06-19T09:20:55.00832955Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=1.06385ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.011411988Z level=info msg="Executing migration" id="add index alert state" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.012426078Z level=info msg="Migration successfully executed" id="add index alert state" duration=1.01262ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.017781338Z level=info msg="Executing migration" id="add index alert dashboard_id" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.018600456Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=818.878µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.025498789Z level=info msg="Executing migration" id="Create alert_rule_tag table v1" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.026163816Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=664.467µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.029713189Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.030581327Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=867.749µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.035034178Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.035887336Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=921.379µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.048818786Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.059014821Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=10.193765ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.06433309Z level=info msg="Executing migration" id="Create alert_rule_tag table v2" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.06539331Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=1.06075ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.069580579Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.070472707Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=891.928µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.076348152Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.076632864Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=284.752µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.080105777Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.080663532Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=557.325µs 09:27:20 grafana | logger=migrator 
t=2025-06-19T09:20:55.086298394Z level=info msg="Executing migration" id="create alert_notification table v1" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.087097602Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=802.948µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.0901626Z level=info msg="Executing migration" id="Add column is_default" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.094170657Z level=info msg="Migration successfully executed" id="Add column is_default" duration=4.007447ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.097422388Z level=info msg="Executing migration" id="Add column frequency" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.101432815Z level=info msg="Migration successfully executed" id="Add column frequency" duration=4.009867ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.109282998Z level=info msg="Executing migration" id="Add column send_reminder" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.113102553Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=3.819515ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.121153788Z level=info msg="Executing migration" id="Add column disable_resolve_message" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.123729542Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=2.575604ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.127046043Z level=info msg="Executing migration" id="add index alert_notification org_id & name" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.127920731Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=874.338µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.132413352Z level=info msg="Executing migration" id="Update alert table charset" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.132440053Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=27.131µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.13539841Z level=info msg="Executing migration" id="Update alert_notification table charset" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.135431621Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=33.951µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.13965025Z level=info msg="Executing migration" id="create notification_journal table v1" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.140536618Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=885.698µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.147004458Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.147929127Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=924.449µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.152346388Z level=info msg="Executing migration" id="drop alert_notification_journal" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.153101965Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=755.367µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.157233313Z 
level=info msg="Executing migration" id="create alert_notification_state table v1" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.15804049Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=809.997µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.162398311Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.163802814Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=1.403673ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.167522648Z level=info msg="Executing migration" id="Add for to alert table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.172341793Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=4.819405ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.176710554Z level=info msg="Executing migration" id="Add column uid in alert_notification" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.180888953Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=4.175589ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.185846149Z level=info msg="Executing migration" id="Update uid column values in alert_notification" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.186035321Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=189.072µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.190417371Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.19135346Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=935.159µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.19563958Z level=info msg="Executing migration" id="Remove unique index org_id_name" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.196943312Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=1.302862ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.212532426Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.218641623Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=6.108297ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.221872093Z level=info msg="Executing migration" id="alter alert.settings to mediumtext" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.221890744Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=18.55µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.225310255Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.226209064Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=898.489µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.231338651Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.232172759Z level=info 
msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=833.498µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.235653631Z level=info msg="Executing migration" id="Drop old annotation table v4" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.235772493Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=119.372µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.239075133Z level=info msg="Executing migration" id="create annotation table v5" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.240490066Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=1.414333ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.244971768Z level=info msg="Executing migration" id="add index annotation 0 v3" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.245878426Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=906.298µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.250809332Z level=info msg="Executing migration" id="add index annotation 1 v3" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.252059314Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=1.248992ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.255547956Z level=info msg="Executing migration" id="add index annotation 2 v3" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.256923739Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=1.375273ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.261682863Z level=info msg="Executing migration" id="add index annotation 3 v3" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.262714603Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=1.0323ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.266813451Z level=info msg="Executing migration" id="add index annotation 4 v3" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.267704889Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=891.158µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.27103032Z level=info msg="Executing migration" id="Update annotation table charset" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.27105845Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=28.71µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.2753495Z level=info msg="Executing migration" id="Add column region_id to annotation table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.281878741Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=6.528801ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.286426563Z level=info msg="Executing migration" id="Drop category_id index" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.287065749Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=641.386µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.290337899Z level=info msg="Executing migration" id="Add column tags to annotation table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.296217664Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=5.877485ms 09:27:20 grafana | logger=migrator 
t=2025-06-19T09:20:55.300437923Z level=info msg="Executing migration" id="Create annotation_tag table v2" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.301073809Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=635.656µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.304258748Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.305190057Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=931.199µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.308595909Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.309374356Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=777.827µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.313596825Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.325402375Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=11.80581ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.329676974Z level=info msg="Executing migration" id="Create annotation_tag table v3" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.330167459Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=489.815µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.334476349Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.335427768Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=950.249µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.343332631Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.343756635Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=423.304µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.348353568Z level=info msg="Executing migration" id="drop table annotation_tag_v2" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.349150765Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=808.777µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.379290195Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.379596838Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=306.873µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.38518103Z level=info msg="Executing migration" id="Add created time to annotation table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.392010003Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=6.837543ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.395314734Z level=info msg="Executing migration" id="Add updated 
time to annotation table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.398214191Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=2.899397ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.401667023Z level=info msg="Executing migration" id="Add index for created in annotation table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.40243258Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=764.487µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.407081304Z level=info msg="Executing migration" id="Add index for updated in annotation table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.408041742Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=960.098µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.411334983Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.411608866Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=273.723µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.416063177Z level=info msg="Executing migration" id="Add epoch_end column" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.420606279Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=4.542602ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.424156262Z level=info msg="Executing migration" id="Add index for epoch_end" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.42505674Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=899.918µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.429085218Z level=info msg="Executing migration" id="Make epoch_end the same as epoch" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.429254619Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=169.171µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.432637291Z level=info msg="Executing migration" id="Move region to single row" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.433029414Z level=info msg="Migration successfully executed" id="Move region to single row" duration=392.433µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.436567817Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.437802629Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=1.233862ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.44227378Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.443554642Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=1.280502ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.446974694Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.447939453Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on 
annotation table" duration=963.809µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.45196455Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.45298445Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=1.02269ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.457030948Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.45842101Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=1.381693ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.462620959Z level=info msg="Executing migration" id="Add index for alert_id on annotation table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.464546297Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=1.924138ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.470607273Z level=info msg="Executing migration" id="Increase tags column to length 4096" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.470649054Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=42.271µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.473544241Z level=info msg="Executing migration" id="Increase prev_state column to length 40 not null" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.473580721Z level=info msg="Migration successfully executed" id="Increase prev_state column to length 40 not null" duration=36.94µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.589314266Z level=info msg="Executing migration" id="Increase new_state column to length 40 not null" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.589882141Z level=info msg="Migration successfully executed" id="Increase new_state column to length 40 not null" duration=571.265µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.673667549Z level=info msg="Executing migration" id="create test_data table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.67591844Z level=info msg="Migration successfully executed" id="create test_data table" duration=2.250561ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.753889854Z level=info msg="Executing migration" id="create dashboard_version table v1" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.75552557Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=1.637656ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.819223121Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.820672454Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=1.450703ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.847930128Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.849466402Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=1.536444ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.853185377Z 
level=info msg="Executing migration" id="Set dashboard version to 1 where 0" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.853385719Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=200.412µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.858119482Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.858489586Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=373.404µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.861449303Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.861468443Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=18.76µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.864489911Z level=info msg="Executing migration" id="Add apiVersion for dashboard_version" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.869131215Z level=info msg="Migration successfully executed" id="Add apiVersion for dashboard_version" duration=4.643204ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.871921911Z level=info msg="Executing migration" id="create team table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.872699908Z level=info msg="Migration successfully executed" id="create team table" duration=777.748µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.877822635Z level=info msg="Executing migration" id="add index team.org_id" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.878767384Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=944.379µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.883013554Z level=info msg="Executing migration" id="add unique index team_org_id_name" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.884007963Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=993.719µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.889448393Z level=info msg="Executing migration" id="Add column uid in team" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.897605119Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=8.148796ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.902675876Z level=info msg="Executing migration" id="Update uid column values in team" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.902963469Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=287.953µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.907351909Z level=info msg="Executing migration" id="Add unique index team_org_id_uid" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.908835033Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=1.482674ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.9138869Z level=info msg="Executing migration" id="Add column external_uid in team" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.920395761Z level=info msg="Migration successfully executed" id="Add column external_uid in team" duration=6.507591ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.924228206Z level=info msg="Executing migration" id="Add 
column is_provisioned in team" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.928848119Z level=info msg="Migration successfully executed" id="Add column is_provisioned in team" duration=4.619103ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.931955888Z level=info msg="Executing migration" id="create team member table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.932766716Z level=info msg="Migration successfully executed" id="create team member table" duration=810.998µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.954875571Z level=info msg="Executing migration" id="add index team_member.org_id" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.956364465Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=1.490104ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.96123588Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.962144228Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=908.078µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.96548556Z level=info msg="Executing migration" id="add index team_member.team_id" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.966465459Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=979.489µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.973673446Z level=info msg="Executing migration" id="Add column email to team table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.978505021Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=4.831195ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.98172167Z level=info msg="Executing migration" id="Add column external to team_member table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.987380003Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=5.657783ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.991776314Z level=info msg="Executing migration" id="Add column permission to team_member table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.996704689Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=4.927706ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:55.999935269Z level=info msg="Executing migration" id="add unique index team_member_user_id_org_id" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.000863238Z level=info msg="Migration successfully executed" id="add unique index team_member_user_id_org_id" duration=927.709µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.005960295Z level=info msg="Executing migration" id="create dashboard acl table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.006904864Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=946.289µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.009995794Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.011144166Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=1.147982ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.015386976Z level=info msg="Executing migration" id="add 
unique index dashboard_acl_dashboard_id_user_id" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.017634818Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=2.247232ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.022329303Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.023332953Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=1.0032ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.026444002Z level=info msg="Executing migration" id="add index dashboard_acl_user_id" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.027519833Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=1.075451ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.030657773Z level=info msg="Executing migration" id="add index dashboard_acl_team_id" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.031675103Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=1.01621ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.035873703Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.036895303Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=1.02124ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.042177834Z level=info msg="Executing migration" id="add index dashboard_permission" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.044445986Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=2.256352ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.047798589Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.048367654Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=569.175µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.052433563Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.052696906Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=256.603µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.055330301Z level=info msg="Executing migration" id="create tag table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.056170609Z level=info msg="Migration successfully executed" id="create tag table" duration=838.558µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.06041927Z level=info msg="Executing migration" id="add index tag.key_value" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.061576131Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=1.156121ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.065508519Z level=info msg="Executing migration" id="create login attempt table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.066303907Z level=info msg="Migration successfully executed" id="create login attempt table" duration=795.838µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.069474707Z level=info msg="Executing 
migration" id="add index login_attempt.username" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.071730669Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=2.255662ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.083000678Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.084687064Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=1.689406ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.093121555Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.106964909Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=13.842684ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.109950357Z level=info msg="Executing migration" id="create login_attempt v2" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.110503663Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=553.076µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.114601632Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.115513821Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=912.119µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.122395207Z level=info msg="Executing migration" id="copy login_attempt v1 to v2" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.122992133Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=599.226µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.12574281Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.126493467Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=750.287µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.129376375Z level=info msg="Executing migration" id="create user auth table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.130278064Z level=info msg="Migration successfully executed" id="create user auth table" duration=900.839µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.136571494Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.137933757Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=1.346773ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.14138079Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.141400711Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=20.841µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.146263507Z level=info msg="Executing migration" id="Add OAuth access token to user_auth" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.151418377Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=5.15485ms 09:27:20 grafana | logger=migrator 
t=2025-06-19T09:20:56.155522037Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.159276583Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=3.754186ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.163413933Z level=info msg="Executing migration" id="Add OAuth token type to user_auth" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.169423061Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=6.008258ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.174680921Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.180622399Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=5.943838ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.184142723Z level=info msg="Executing migration" id="Add index to user_id column in user_auth" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.185206633Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=1.06352ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.190245581Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.195734774Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=5.488673ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.214079091Z level=info msg="Executing migration" id="Add user_unique_id to user_auth" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.221617624Z level=info msg="Migration successfully executed" id="Add user_unique_id to user_auth" duration=7.538853ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.224609423Z level=info msg="Executing migration" id="create server_lock table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.225324039Z level=info msg="Migration successfully executed" id="create server_lock table" duration=714.346µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.230492589Z level=info msg="Executing migration" id="add index server_lock.operation_uid" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.231484899Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=992.33µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.234648279Z level=info msg="Executing migration" id="create user auth token table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.236377196Z level=info msg="Migration successfully executed" id="create user auth token table" duration=1.716517ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.23987636Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.242105471Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=2.229521ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.247670225Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.248726925Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=1.05641ms 09:27:20 grafana | logger=migrator 
t=2025-06-19T09:20:56.251672463Z level=info msg="Executing migration" id="add index user_auth_token.user_id" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.252683203Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=1.01045ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.255623462Z level=info msg="Executing migration" id="Add revoked_at to the user auth token" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.261263316Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=5.639154ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.26683883Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.268027641Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=1.165461ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.271303013Z level=info msg="Executing migration" id="add external_session_id to user_auth_token" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.277713124Z level=info msg="Migration successfully executed" id="add external_session_id to user_auth_token" duration=6.425521ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.280589592Z level=info msg="Executing migration" id="create cache_data table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.28142313Z level=info msg="Migration successfully executed" id="create cache_data table" duration=833.208µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.287723841Z level=info msg="Executing migration" id="add unique index cache_data.cache_key" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.28870004Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=975.779µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.292076713Z level=info msg="Executing migration" id="create short_url table v1" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.293354805Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=1.277622ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.296715288Z level=info msg="Executing migration" id="add index short_url.org_id-uid" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.299107011Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=2.384993ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.303839536Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.303874616Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=35.9µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.30843161Z level=info msg="Executing migration" id="delete alert_definition table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.308534421Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=110.611µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.311336049Z level=info msg="Executing migration" id="recreate alert_definition table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.312263397Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=927.008µs 09:27:20 grafana | logger=migrator 
t=2025-06-19T09:20:56.315335657Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.316377087Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=1.04084ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.32085083Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.321935601Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=1.088601ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.325087821Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.325249463Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=162.012µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.328172571Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.329479993Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=1.307002ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.334173049Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.33535818Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=1.185011ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.338080816Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.339408509Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=1.322483ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.344502788Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.346539168Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=2.03533ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.350509426Z level=info msg="Executing migration" id="Add column paused in alert_definition" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.355000039Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=4.490283ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.357930807Z level=info msg="Executing migration" id="drop alert_definition table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.358675675Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=744.378µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.363273759Z level=info msg="Executing migration" id="delete alert_definition_version table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.363545592Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=270.953µs 09:27:20 grafana | 
logger=migrator t=2025-06-19T09:20:56.366907014Z level=info msg="Executing migration" id="recreate alert_definition_version table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.368378388Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=1.470674ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.371717921Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.372997153Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=1.279272ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.378278314Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.379620247Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=1.349043ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.382856308Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.382885508Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=29.85µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.392330439Z level=info msg="Executing migration" id="drop alert_definition_version table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.393261408Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=930.559µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.397508079Z level=info msg="Executing migration" id="create alert_instance table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.399072454Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=1.563545ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.402274745Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.40376416Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=1.489665ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.408390314Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.409693216Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=1.302152ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.412705455Z level=info msg="Executing migration" id="add column current_state_end to alert_instance" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.418632453Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=5.926398ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.421544571Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance" 09:27:20 grafana | 
logger=migrator t=2025-06-19T09:20:56.422313348Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=768.677µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.427757871Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.429346416Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=1.588365ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.432571367Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.460192593Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=27.618666ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.464622006Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.488750019Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=24.124043ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.494670625Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.495475133Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=804.468µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.498383451Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.499066828Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=682.837µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.501923395Z level=info msg="Executing migration" id="add current_reason column related to current_state" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.507893963Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=5.970068ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.512280395Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.518131742Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=5.850737ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.523314102Z level=info msg="Executing migration" id="create alert_rule table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.524569994Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=1.256262ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.528686103Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.529914665Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=1.228332ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.533346368Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns" 09:27:20 grafana | logger=migrator 
t=2025-06-19T09:20:56.534421989Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=1.069021ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.540142404Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.540909581Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=766.977µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.543691278Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.543705488Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=14.79µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.546635146Z level=info msg="Executing migration" id="add column for to alert_rule" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.555852465Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=9.218519ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.561989625Z level=info msg="Executing migration" id="add column annotations to alert_rule" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.569871311Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=7.882566ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.573055081Z level=info msg="Executing migration" id="add column labels to alert_rule" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.580780255Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=7.723694ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.58436384Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.585227948Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=863.888µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.592377717Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.5936321Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=1.257943ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.597253105Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.606632675Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=9.38019ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.610522682Z level=info msg="Executing migration" id="add panel_id column to alert_rule" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.616706832Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=6.18365ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.622276096Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.62372107Z level=info msg="Migration successfully 
executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=1.451184ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.628734578Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.635663074Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=6.927286ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.641482301Z level=info msg="Executing migration" id="add is_paused column to alert_rule table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.649567869Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=8.085678ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.656911859Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.65693375Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=23.231µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.660647695Z level=info msg="Executing migration" id="create alert_rule_version table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.662092169Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=1.444534ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.665194529Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.666260659Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.05747ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.671976205Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.673006315Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=1.02958ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.675963863Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.676005344Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=42.141µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.678286726Z level=info msg="Executing migration" id="add column for to alert_rule_version" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.684878739Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=6.585173ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.687899838Z level=info msg="Executing migration" id="add column annotations to alert_rule_version" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.69435403Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=6.453982ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.700811413Z level=info msg="Executing migration" id="add column labels to alert_rule_version" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.708688379Z 
level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=7.875496ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.713574955Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.720893486Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=7.321171ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.72539896Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.730409098Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=5.009278ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.733893182Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.733911082Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=18.21µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.738083772Z level=info msg="Executing migration" id=create_alert_configuration_table 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.739134272Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=1.05044ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.744527714Z level=info msg="Executing migration" id="Add column default in alert_configuration" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.754141387Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=9.614263ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.75866687Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.75868126Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=14.87µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.761915392Z level=info msg="Executing migration" id="add column org_id in alert_configuration" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.771449023Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=9.532981ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.774638554Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.775393121Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=754.397µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.782389399Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.788844681Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=6.463472ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.792252534Z level=info msg="Executing migration" id=create_ngalert_configuration_table 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.79285909Z level=info msg="Migration successfully 
executed" id=create_ngalert_configuration_table duration=602.426µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.798459884Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.8001657Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=1.668536ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.806437781Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.813022274Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=6.585123ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.816811611Z level=info msg="Executing migration" id="create provenance_type table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.81772213Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=910.399µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.821237764Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.822323014Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=1.084891ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.827009879Z level=info msg="Executing migration" id="create alert_image table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.827829747Z level=info msg="Migration successfully executed" id="create alert_image table" duration=819.698µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.837718832Z level=info msg="Executing migration" id="add unique index on token to alert_image table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.839458619Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=1.739857ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.843038604Z level=info msg="Executing migration" id="support longer URLs in alert_image table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.843055094Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=17.15µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.846464727Z level=info msg="Executing migration" id=create_alert_configuration_history_table 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.848120963Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=1.655696ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.856622134Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.85825996Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=1.628846ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.864069196Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.864537901Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists" 09:27:20 grafana | logger=migrator 
t=2025-06-19T09:20:56.869828132Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.870317236Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=488.694µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.874460496Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.875979931Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=1.522425ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.879203632Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.886640124Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=7.435022ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.889645893Z level=info msg="Executing migration" id="create library_element table v1" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.890541361Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=895.228µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.896991774Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.898198535Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=1.207111ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.901480047Z level=info msg="Executing migration" id="create library_element_connection table v1" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.902535727Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=1.06177ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.905798579Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.906703187Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=904.868µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.912632624Z level=info msg="Executing migration" id="add unique index library_element org_id_uid" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.915114588Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=2.484624ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.918923755Z level=info msg="Executing migration" id="increase max description length to 2048" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.918974585Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=60.58µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.923656111Z level=info msg="Executing migration" id="alter library_element model to mediumtext" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.923675781Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=20.03µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.927650209Z level=info msg="Executing migration" 
id="add library_element folder uid" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.939839636Z level=info msg="Migration successfully executed" id="add library_element folder uid" duration=12.173237ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.944522492Z level=info msg="Executing migration" id="populate library_element folder_uid" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.944879955Z level=info msg="Migration successfully executed" id="populate library_element folder_uid" duration=357.103µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.947169637Z level=info msg="Executing migration" id="add index library_element org_id-folder_uid-name-kind" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.948272508Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_uid-name-kind" duration=1.102601ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.952030344Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.952391898Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=360.983µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.95783033Z level=info msg="Executing migration" id="create data_keys table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.958923781Z level=info msg="Migration successfully executed" id="create data_keys table" duration=1.092841ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.961662327Z level=info msg="Executing migration" id="create secrets table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.962564216Z level=info msg="Migration successfully executed" id="create secrets table" duration=901.699µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:56.965528594Z level=info msg="Executing migration" id="rename data_keys name column to id" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.000858025Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=35.32648ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.004471269Z level=info msg="Executing migration" id="add name column into data_keys" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.010008424Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=5.536205ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.012911453Z level=info msg="Executing migration" id="copy data_keys id column values into name" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.013091085Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=179.682µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.017537328Z level=info msg="Executing migration" id="rename data_keys name column to label" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.050154699Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=32.610381ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.053517952Z level=info msg="Executing migration" id="rename data_keys id column back to name" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.089047311Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=35.523309ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.092489895Z level=info 
msg="Executing migration" id="create kv_store table v1" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.093546195Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=1.05554ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.098199582Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.099393323Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=1.193562ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.102194951Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.102516614Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=321.023µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.105632615Z level=info msg="Executing migration" id="create permission table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.106480223Z level=info msg="Migration successfully executed" id="create permission table" duration=847.048µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.113219319Z level=info msg="Executing migration" id="add unique index permission.role_id" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.11431604Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=1.096631ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.117620402Z level=info msg="Executing migration" id="add unique index role_id_action_scope" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.118751983Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=1.131371ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.122860854Z level=info msg="Executing migration" id="create role table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.123882764Z level=info msg="Migration successfully executed" id="create role table" duration=1.02127ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.129354868Z level=info msg="Executing migration" id="add column display_name" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.137834201Z level=info msg="Migration successfully executed" id="add column display_name" duration=8.478703ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.140975052Z level=info msg="Executing migration" id="add column group_name" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.146286944Z level=info msg="Migration successfully executed" id="add column group_name" duration=5.311492ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.150358114Z level=info msg="Executing migration" id="add index role.org_id" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.151538866Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=1.176902ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.154674487Z level=info msg="Executing migration" id="add unique index role_org_id_name" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.1559865Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=1.311423ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.159297592Z level=info msg="Executing migration" id="add index role_org_id_uid" 09:27:20 grafana | logger=migrator 
t=2025-06-19T09:20:57.160530634Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=1.232662ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.16416932Z level=info msg="Executing migration" id="create team role table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.16515364Z level=info msg="Migration successfully executed" id="create team role table" duration=988.11µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.17126184Z level=info msg="Executing migration" id="add index team_role.org_id" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.172329201Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=1.067331ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.175705414Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.176752204Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=1.04656ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.180976936Z level=info msg="Executing migration" id="add index team_role.team_id" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.181984226Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=1.01549ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.185969635Z level=info msg="Executing migration" id="create user role table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.186804733Z level=info msg="Migration successfully executed" id="create user role table" duration=834.838µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.190495349Z level=info msg="Executing migration" id="add index user_role.org_id" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.1915628Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=1.067231ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.196324097Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.202089363Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=5.762506ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.207106653Z level=info msg="Executing migration" id="add index user_role.user_id" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.208366525Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=1.260682ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.212552306Z level=info msg="Executing migration" id="create builtin role table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.213668737Z level=info msg="Migration successfully executed" id="create builtin role table" duration=1.116571ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.216714507Z level=info msg="Executing migration" id="add index builtin_role.role_id" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.217928769Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=1.213702ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.222707866Z level=info msg="Executing migration" id="add index builtin_role.name" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.224019649Z level=info msg="Migration successfully executed" id="add index 
builtin_role.name" duration=1.310943ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.228849887Z level=info msg="Executing migration" id="Add column org_id to builtin_role table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.24038827Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=11.548834ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.243467811Z level=info msg="Executing migration" id="add index builtin_role.org_id" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.244361169Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=893.018µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.248263628Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.249331718Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=1.0667ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.253036155Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.254912033Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=1.875688ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.259336457Z level=info msg="Executing migration" id="add unique index role.uid" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.2606829Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=1.338113ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.265943702Z level=info msg="Executing migration" id="create seed assignment table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.267321645Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=1.369803ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.27088254Z level=info msg="Executing migration" id="add unique index builtin_role_role_name" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.271959361Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=1.076501ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.286032789Z level=info msg="Executing migration" id="add column hidden to role table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.298820735Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=12.787416ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.303904625Z level=info msg="Executing migration" id="permission kind migration" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.312168076Z level=info msg="Migration successfully executed" id="permission kind migration" duration=8.259631ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.315843163Z level=info msg="Executing migration" id="permission attribute migration" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.323650519Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=7.798036ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.327173764Z level=info msg="Executing migration" id="permission identifier migration" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.338513865Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=11.338951ms 
09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.345230142Z level=info msg="Executing migration" id="add permission identifier index" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.346764257Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=1.537405ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.351624735Z level=info msg="Executing migration" id="add permission action scope role_id index" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.352835816Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=1.210271ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.357417451Z level=info msg="Executing migration" id="remove permission role_id action scope index" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.359103298Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=1.684947ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.362290029Z level=info msg="Executing migration" id="add group mapping UID column to user_role table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.372113666Z level=info msg="Migration successfully executed" id="add group mapping UID column to user_role table" duration=9.823047ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.377027905Z level=info msg="Executing migration" id="add user_role org ID, user ID, role ID, group mapping UID index" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.377854913Z level=info msg="Migration successfully executed" id="add user_role org ID, user ID, role ID, group mapping UID index" duration=820.098µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.381116705Z level=info msg="Executing migration" id="remove user_role org ID, user ID, role ID index" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.381836182Z level=info msg="Migration successfully executed" id="remove user_role org ID, user ID, role ID index" duration=719.017µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.384622719Z level=info msg="Executing migration" id="create query_history table v1" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.385550528Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=927.419µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.389968552Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.391097493Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=1.128521ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.396242774Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.396261264Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=18.84µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.399322834Z level=info msg="Executing migration" id="create query_history_details table v1" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.400237343Z level=info msg="Migration successfully executed" id="create query_history_details table v1" duration=914.179µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.406974569Z level=info msg="Executing 
migration" id="rbac disabled migrator" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.407043Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=68.801µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.410827007Z level=info msg="Executing migration" id="teams permissions migration" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.411307022Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=479.435µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.41418399Z level=info msg="Executing migration" id="dashboard permissions" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.414875907Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=692.547µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.420401701Z level=info msg="Executing migration" id="dashboard permissions uid scopes" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.42126454Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=862.689µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.425213838Z level=info msg="Executing migration" id="drop managed folder create actions" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.425451901Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=245.813µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.430608851Z level=info msg="Executing migration" id="alerting notification permissions" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.431157997Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=549.766µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.43656483Z level=info msg="Executing migration" id="create query_history_star table v1" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.437446519Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=881.569µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.443288396Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.445107334Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=1.818748ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.451906621Z level=info msg="Executing migration" id="add column org_id in query_history_star" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.4578549Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=5.947749ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.461399095Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.461438745Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=43.59µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.465177282Z level=info msg="Executing migration" id="create correlation table v1" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.466480284Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=1.302482ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.470907578Z level=info msg="Executing migration" id="add index 
correlations.uid" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.472005669Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=1.097891ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.476394662Z level=info msg="Executing migration" id="add index correlations.source_uid" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.477607654Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=1.213122ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.482855945Z level=info msg="Executing migration" id="add correlation config column" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.491080426Z level=info msg="Migration successfully executed" id="add correlation config column" duration=8.218641ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.494597791Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.495388829Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=790.598µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.508053964Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.509176995Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=1.124851ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.513002752Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.531378683Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=18.374821ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.535425683Z level=info msg="Executing migration" id="create correlation v2" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.53620758Z level=info msg="Migration successfully executed" id="create correlation v2" duration=781.567µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.540446882Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.541630114Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=1.182912ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.547518142Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.549464781Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=1.946399ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.553135997Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.554644012Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=1.508325ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.559418309Z level=info msg="Executing migration" id="copy correlation v1 to v2" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.559711732Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=293.153µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.564137065Z level=info msg="Executing migration" 
id="drop correlation_tmp_qwerty" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.564933153Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=795.798µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.569501028Z level=info msg="Executing migration" id="add provisioning column" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.581048432Z level=info msg="Migration successfully executed" id="add provisioning column" duration=11.546874ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.585394154Z level=info msg="Executing migration" id="add type column" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.593850668Z level=info msg="Migration successfully executed" id="add type column" duration=8.455464ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.599111749Z level=info msg="Executing migration" id="create entity_events table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.599741976Z level=info msg="Migration successfully executed" id="create entity_events table" duration=629.857µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.604205239Z level=info msg="Executing migration" id="create dashboard public config v1" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.605872776Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=1.666967ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.611085337Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.611534791Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.614981375Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.615423689Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.62061075Z level=info msg="Executing migration" id="Drop old dashboard public config table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.621793952Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=1.182821ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.625509018Z level=info msg="Executing migration" id="recreate dashboard public config v1" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.627281985Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=1.773117ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.63685547Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.638673038Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=1.822448ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.64600943Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.647244552Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 
duration=1.237332ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.650248692Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.651167041Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=918.319µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.655594944Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.657091489Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.499815ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.660404061Z level=info msg="Executing migration" id="Drop public config table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.662912576Z level=info msg="Migration successfully executed" id="Drop public config table" duration=2.507165ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.667459561Z level=info msg="Executing migration" id="Recreate dashboard public config v2" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.668642562Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=1.182911ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.671593161Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.673554071Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=1.95968ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.677484479Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.67856281Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.077821ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.684071454Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.685275306Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=1.202512ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.691619188Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.715018159Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=23.398931ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.718566444Z level=info msg="Executing migration" id="add annotations_enabled column" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.725244019Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=6.677025ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.734593151Z level=info msg="Executing migration" id="add time_selection_enabled column" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.745028454Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=10.432733ms 09:27:20 grafana | logger=migrator 
t=2025-06-19T09:20:57.753340496Z level=info msg="Executing migration" id="delete orphaned public dashboards" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.753899251Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=561.635µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.757366625Z level=info msg="Executing migration" id="add share column" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.76400257Z level=info msg="Migration successfully executed" id="add share column" duration=6.635515ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.770401214Z level=info msg="Executing migration" id="backfill empty share column fields with default of public" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.770603946Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=202.342µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.775979678Z level=info msg="Executing migration" id="create file table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.776744936Z level=info msg="Migration successfully executed" id="create file table" duration=764.958µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.780677615Z level=info msg="Executing migration" id="file table idx: path natural pk" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.781728505Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=1.05057ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.784647144Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.785584823Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=937.409µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.790808984Z level=info msg="Executing migration" id="create file_meta table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.791471461Z level=info msg="Migration successfully executed" id="create file_meta table" duration=662.437µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.795192528Z level=info msg="Executing migration" id="file table idx: path key" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.796095086Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=902.168µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.800233827Z level=info msg="Executing migration" id="set path collation in file table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.800248707Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=15.07µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.80861903Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.8086346Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=15.57µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.810966633Z level=info msg="Executing migration" id="managed permissions migration" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.811548868Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=583.195µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.814601918Z 
level=info msg="Executing migration" id="managed folder permissions alert actions migration" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.814853071Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=242.223µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.818656638Z level=info msg="Executing migration" id="RBAC action name migrator" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.81985279Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=1.196172ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.82595538Z level=info msg="Executing migration" id="Add UID column to playlist" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.832648226Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=6.692536ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.835567285Z level=info msg="Executing migration" id="Update uid column values in playlist" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.835787377Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=218.712µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.838621245Z level=info msg="Executing migration" id="Add index for uid in playlist" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.839652545Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=1.03115ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.842480913Z level=info msg="Executing migration" id="update group index for alert rules" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.842866746Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=385.283µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.8503175Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.850524962Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=207.152µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.85340676Z level=info msg="Executing migration" id="admin only folder/dashboard permission" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.853814454Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=407.444µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.856660172Z level=info msg="Executing migration" id="add action column to seed_assignment" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.863376248Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=6.715476ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.86658308Z level=info msg="Executing migration" id="add scope column to seed_assignment" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.877282965Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=10.699565ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.88288172Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.884045152Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable 
update" duration=1.163172ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.88689962Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.96005527Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=73.15454ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.963877457Z level=info msg="Executing migration" id="add unique index builtin_role_name back" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.964863027Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=985.1µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.969703644Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.971619963Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=1.915899ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:57.974804065Z level=info msg="Executing migration" id="add primary key to seed_assigment" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.001235965Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=26.43117ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.007643947Z level=info msg="Executing migration" id="add origin column to seed_assignment" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.019102907Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=11.45857ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.022967084Z level=info msg="Executing migration" id="add origin to plugin seed_assignment" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.023201746Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=234.692µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.026369837Z level=info msg="Executing migration" id="prevent seeding OnCall access" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.026840361Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=473.455µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.02990798Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.030370505Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=461.655µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.036874297Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.037094039Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=219.362µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.039148908Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.039371501Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=222.122µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.042224228Z level=info msg="Executing migration" id="create folder table" 
09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.043102546Z level=info msg="Migration successfully executed" id="create folder table" duration=877.908µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.045959153Z level=info msg="Executing migration" id="Add index for parent_uid" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.047028234Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=1.068811ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.053869119Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.054749867Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=880.558µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.057650075Z level=info msg="Executing migration" id="Update folder title length" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.057670155Z level=info msg="Migration successfully executed" id="Update folder title length" duration=20.85µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.06137548Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.062272989Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=903.319µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.069590989Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.070481928Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=890.169µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.073341705Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.074246914Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=904.609µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.0769912Z level=info msg="Executing migration" id="Sync dashboard and folder table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.077424214Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=432.944µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.083588623Z level=info msg="Executing migration" id="Remove ghost folders from the folder table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.083858425Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=261.432µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.086658192Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.087625531Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=971.739µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.090506549Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.091410857Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=904.108µs 09:27:20 grafana | logger=migrator 
t=2025-06-19T09:20:58.096628877Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.097515685Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=886.528µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.10007563Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.100980029Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=904.089µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.104378551Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.10527964Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=900.819µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.110773882Z level=info msg="Executing migration" id="Remove unique index UQE_folder_org_id_parent_uid_title" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.11163015Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_org_id_parent_uid_title" duration=858.108µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.114867111Z level=info msg="Executing migration" id="create anon_device table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.115562458Z level=info msg="Migration successfully executed" id="create anon_device table" duration=695.057µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.121154831Z level=info msg="Executing migration" id="add unique index anon_device.device_id" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.122022549Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=867.548µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.124925717Z level=info msg="Executing migration" id="add index anon_device.updated_at" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.125840886Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=914.959µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.128740044Z level=info msg="Executing migration" id="create signing_key table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.12944372Z level=info msg="Migration successfully executed" id="create signing_key table" duration=703.356µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.135048094Z level=info msg="Executing migration" id="add unique index signing_key.key_id" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.135908772Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=860.498µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.139037612Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.140013721Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=981.069µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.14307817Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.143439544Z level=info 
msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=361.754µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.148635853Z level=info msg="Executing migration" id="Add folder_uid for dashboard" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.155540779Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=6.904546ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.158649109Z level=info msg="Executing migration" id="Populate dashboard folder_uid column" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.159244715Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=600.406µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.1650786Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.16509364Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=20.49µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.170414991Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.171445531Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=1.03023ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.174772973Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.174786513Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=14.11µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.185699017Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.186686167Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=986.76µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.19126405Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.192119189Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=855.009µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.195289699Z level=info msg="Executing migration" id="Remove unique index for dashboard_org_id_folder_uid_title_is_folder" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.196112377Z level=info msg="Migration successfully executed" id="Remove unique index for dashboard_org_id_folder_uid_title_is_folder" duration=822.738µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.199194736Z level=info msg="Executing migration" id="create sso_setting table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.199986434Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=791.418µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.203897841Z level=info msg="Executing migration" id="copy kvstore migration status to each org" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.204542457Z level=info msg="Migration successfully executed" id="copy 
kvstore migration status to each org" duration=645.166µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.207664167Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.207908969Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=242.742µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.21112036Z level=info msg="Executing migration" id="managed dashboard permissions annotation actions migration" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.211631525Z level=info msg="Migration successfully executed" id="managed dashboard permissions annotation actions migration" duration=511.205µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.214619533Z level=info msg="Executing migration" id="create cloud_migration table v1" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.21531932Z level=info msg="Migration successfully executed" id="create cloud_migration table v1" duration=699.237µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.221559209Z level=info msg="Executing migration" id="create cloud_migration_run table v1" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.223092824Z level=info msg="Migration successfully executed" id="create cloud_migration_run table v1" duration=1.536085ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.227229833Z level=info msg="Executing migration" id="add stack_id column" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.237188019Z level=info msg="Migration successfully executed" id="add stack_id column" duration=9.954816ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.243838422Z level=info msg="Executing migration" id="add region_slug column" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.254344643Z level=info msg="Migration successfully executed" id="add region_slug column" duration=10.50593ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.25719781Z level=info msg="Executing migration" id="add cluster_slug column" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.266992933Z level=info msg="Migration successfully executed" id="add cluster_slug column" duration=9.793713ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.271617647Z level=info msg="Executing migration" id="add migration uid column" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.281416711Z level=info msg="Migration successfully executed" id="add migration uid column" duration=9.797014ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.286750222Z level=info msg="Executing migration" id="Update uid column values for migration" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.286944193Z level=info msg="Migration successfully executed" id="Update uid column values for migration" duration=194.111µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.290598218Z level=info msg="Executing migration" id="Add unique index migration_uid" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.292477046Z level=info msg="Migration successfully executed" id="Add unique index migration_uid" duration=1.878438ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.298252562Z level=info msg="Executing migration" id="add migration run uid column" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.310569319Z level=info msg="Migration successfully executed" id="add migration run uid column" duration=12.311347ms 09:27:20 
grafana | logger=migrator t=2025-06-19T09:20:58.314307095Z level=info msg="Executing migration" id="Update uid column values for migration run" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.314518597Z level=info msg="Migration successfully executed" id="Update uid column values for migration run" duration=211.762µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.318146131Z level=info msg="Executing migration" id="Add unique index migration_run_uid" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.319438214Z level=info msg="Migration successfully executed" id="Add unique index migration_run_uid" duration=1.292633ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.325911036Z level=info msg="Executing migration" id="Rename table cloud_migration to cloud_migration_session_tmp_qwerty - v1" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.351400559Z level=info msg="Migration successfully executed" id="Rename table cloud_migration to cloud_migration_session_tmp_qwerty - v1" duration=25.486144ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.355346547Z level=info msg="Executing migration" id="create cloud_migration_session v2" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.356330896Z level=info msg="Migration successfully executed" id="create cloud_migration_session v2" duration=984.449µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.371413Z level=info msg="Executing migration" id="create index UQE_cloud_migration_session_uid - v2" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.373126626Z level=info msg="Migration successfully executed" id="create index UQE_cloud_migration_session_uid - v2" duration=1.716526ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.386706586Z level=info msg="Executing migration" id="copy cloud_migration_session v1 to v2" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.387512113Z level=info msg="Migration successfully executed" id="copy cloud_migration_session v1 to v2" duration=802.187µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.390963016Z level=info msg="Executing migration" id="drop cloud_migration_session_tmp_qwerty" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.392059287Z level=info msg="Migration successfully executed" id="drop cloud_migration_session_tmp_qwerty" duration=1.095901ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.40180347Z level=info msg="Executing migration" id="Rename table cloud_migration_run to cloud_migration_snapshot_tmp_qwerty - v1" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.43536822Z level=info msg="Migration successfully executed" id="Rename table cloud_migration_run to cloud_migration_snapshot_tmp_qwerty - v1" duration=33.56335ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.439918904Z level=info msg="Executing migration" id="create cloud_migration_snapshot v2" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.440813712Z level=info msg="Migration successfully executed" id="create cloud_migration_snapshot v2" duration=895.338µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.445239304Z level=info msg="Executing migration" id="create index UQE_cloud_migration_snapshot_uid - v2" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.446277274Z level=info msg="Migration successfully executed" id="create index UQE_cloud_migration_snapshot_uid - v2" duration=1.04607ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.450548765Z level=info msg="Executing 
migration" id="copy cloud_migration_snapshot v1 to v2" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.450861048Z level=info msg="Migration successfully executed" id="copy cloud_migration_snapshot v1 to v2" duration=312.653µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.453292461Z level=info msg="Executing migration" id="drop cloud_migration_snapshot_tmp_qwerty" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.454125329Z level=info msg="Migration successfully executed" id="drop cloud_migration_snapshot_tmp_qwerty" duration=832.178µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.456990577Z level=info msg="Executing migration" id="add snapshot upload_url column" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.46679181Z level=info msg="Migration successfully executed" id="add snapshot upload_url column" duration=9.797963ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.472640996Z level=info msg="Executing migration" id="add snapshot status column" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.487042884Z level=info msg="Migration successfully executed" id="add snapshot status column" duration=14.401508ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.502061257Z level=info msg="Executing migration" id="add snapshot local_directory column" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.509449457Z level=info msg="Migration successfully executed" id="add snapshot local_directory column" duration=7.38898ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.512022542Z level=info msg="Executing migration" id="add snapshot gms_snapshot_uid column" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.520314161Z level=info msg="Migration successfully executed" id="add snapshot gms_snapshot_uid column" duration=8.290669ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.525692773Z level=info msg="Executing migration" id="add snapshot encryption_key column" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.535164413Z level=info msg="Migration successfully executed" id="add snapshot encryption_key column" duration=9.47091ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.538129801Z level=info msg="Executing migration" id="add snapshot error_string column" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.547692612Z level=info msg="Migration successfully executed" id="add snapshot error_string column" duration=9.562291ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.551186406Z level=info msg="Executing migration" id="create cloud_migration_resource table v1" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.551882662Z level=info msg="Migration successfully executed" id="create cloud_migration_resource table v1" duration=695.886µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.557228143Z level=info msg="Executing migration" id="delete cloud_migration_snapshot.result column" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.593652111Z level=info msg="Migration successfully executed" id="delete cloud_migration_snapshot.result column" duration=36.423708ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.596546549Z level=info msg="Executing migration" id="add cloud_migration_resource.name column" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.605479604Z level=info msg="Migration successfully executed" id="add cloud_migration_resource.name column" duration=8.942215ms 09:27:20 grafana | logger=migrator 
t=2025-06-19T09:20:58.608696195Z level=info msg="Executing migration" id="add cloud_migration_resource.parent_name column" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.617115925Z level=info msg="Migration successfully executed" id="add cloud_migration_resource.parent_name column" duration=8.41886ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.629798796Z level=info msg="Executing migration" id="add cloud_migration_session.org_id column" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.642238635Z level=info msg="Migration successfully executed" id="add cloud_migration_session.org_id column" duration=12.440169ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.646124712Z level=info msg="Executing migration" id="add cloud_migration_resource.error_code column" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.653082048Z level=info msg="Migration successfully executed" id="add cloud_migration_resource.error_code column" duration=6.965936ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.661460078Z level=info msg="Executing migration" id="increase resource_uid column length" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.661486749Z level=info msg="Migration successfully executed" id="increase resource_uid column length" duration=28.061µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.664527228Z level=info msg="Executing migration" id="alter kv_store.value to longtext" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.664551298Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=25µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.669527045Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.680564501Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=11.038566ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.683776971Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.692885289Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=9.107338ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.700085627Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.70040639Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=320.613µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.706543829Z level=info msg="Executing migration" id="managed folder permissions alerting silences actions migration" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.706964663Z level=info msg="Migration successfully executed" id="managed folder permissions alerting silences actions migration" duration=420.854µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.712519946Z level=info msg="Executing migration" id="add record column to alert_rule table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.724133397Z level=info msg="Migration successfully executed" id="add record column to alert_rule table" duration=11.613741ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.72761216Z level=info msg="Executing migration" id="add record 
column to alert_rule_version table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.735167632Z level=info msg="Migration successfully executed" id="add record column to alert_rule_version table" duration=7.554532ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.739506383Z level=info msg="Executing migration" id="add resolved_at column to alert_instance table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.749204696Z level=info msg="Migration successfully executed" id="add resolved_at column to alert_instance table" duration=9.698333ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.755522336Z level=info msg="Executing migration" id="add last_sent_at column to alert_instance table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.767310409Z level=info msg="Migration successfully executed" id="add last_sent_at column to alert_instance table" duration=11.786883ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.770051325Z level=info msg="Executing migration" id="Add scope to alert.notifications.receivers:read and alert.notifications.receivers.secrets:read" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.770466499Z level=info msg="Migration successfully executed" id="Add scope to alert.notifications.receivers:read and alert.notifications.receivers.secrets:read" duration=415.144µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.773466398Z level=info msg="Executing migration" id="add metadata column to alert_rule table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.782490364Z level=info msg="Migration successfully executed" id="add metadata column to alert_rule table" duration=9.024097ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.787945876Z level=info msg="Executing migration" id="add metadata column to alert_rule_version table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.797366875Z level=info msg="Migration successfully executed" id="add metadata column to alert_rule_version table" duration=9.415059ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.802416674Z level=info msg="Executing migration" id="delete orphaned service account permissions" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.802592655Z level=info msg="Migration successfully executed" id="delete orphaned service account permissions" duration=175.601µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.805590164Z level=info msg="Executing migration" id="adding action set permissions" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.806326101Z level=info msg="Migration successfully executed" id="adding action set permissions" duration=745.417µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.812186847Z level=info msg="Executing migration" id="create user_external_session table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.813940524Z level=info msg="Migration successfully executed" id="create user_external_session table" duration=1.753397ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.819053963Z level=info msg="Executing migration" id="increase name_id column length to 1024" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.819091393Z level=info msg="Migration successfully executed" id="increase name_id column length to 1024" duration=38.6µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.822778538Z level=info msg="Executing migration" id="increase session_id column length to 1024" 09:27:20 grafana | logger=migrator 
t=2025-06-19T09:20:58.822794508Z level=info msg="Migration successfully executed" id="increase session_id column length to 1024" duration=16.71µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.826020999Z level=info msg="Executing migration" id="remove scope from alert.notifications.receivers:create" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.826406293Z level=info msg="Migration successfully executed" id="remove scope from alert.notifications.receivers:create" duration=384.834µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.831246459Z level=info msg="Executing migration" id="add created_by column to alert_rule_version table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.844506085Z level=info msg="Migration successfully executed" id="add created_by column to alert_rule_version table" duration=13.261006ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.848070729Z level=info msg="Executing migration" id="add updated_by column to alert_rule table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.857067985Z level=info msg="Migration successfully executed" id="add updated_by column to alert_rule table" duration=8.996136ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.861138874Z level=info msg="Executing migration" id="add alert_rule_state table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.861903562Z level=info msg="Migration successfully executed" id="add alert_rule_state table" duration=764.438µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.869584725Z level=info msg="Executing migration" id="add index to alert_rule_state on org_id and rule_uid columns" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.871865737Z level=info msg="Migration successfully executed" id="add index to alert_rule_state on org_id and rule_uid columns" duration=2.279922ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.875232299Z level=info msg="Executing migration" id="add guid column to alert_rule table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.885256595Z level=info msg="Migration successfully executed" id="add guid column to alert_rule table" duration=10.023826ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.889558926Z level=info msg="Executing migration" id="add rule_guid column to alert_rule_version table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.899223148Z level=info msg="Migration successfully executed" id="add rule_guid column to alert_rule_version table" duration=9.664092ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.903928683Z level=info msg="Executing migration" id="cleanup alert_rule_version table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.903945013Z level=info msg="Rule version record limit is not set, fallback to 100" limit=0 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.904102514Z level=info msg="Cleaning up table `alert_rule_version`" batchSize=50 batches=0 keepVersions=100 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.904113114Z level=info msg="Migration successfully executed" id="cleanup alert_rule_version table" duration=184.911µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.906882561Z level=info msg="Executing migration" id="populate rule guid in alert rule table" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.907422566Z level=info msg="Migration successfully executed" id="populate rule guid in alert rule table" duration=539.495µs 09:27:20 grafana | logger=migrator 
t=2025-06-19T09:20:58.910471845Z level=info msg="Executing migration" id="drop index in alert_rule_version table on rule_org_id, rule_uid and version columns" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.911556585Z level=info msg="Migration successfully executed" id="drop index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.08471ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.919100358Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid, rule_guid and version columns" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.920289559Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid, rule_guid and version columns" duration=1.188891ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.92354641Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_guid and version columns" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.925549669Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_guid and version columns" duration=2.050189ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.928666929Z level=info msg="Executing migration" id="add index in alert_rule table on guid columns" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.930339645Z level=info msg="Migration successfully executed" id="add index in alert_rule table on guid columns" duration=1.673266ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.933237163Z level=info msg="Executing migration" id="add keep_firing_for column to alert_rule" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.943150397Z level=info msg="Migration successfully executed" id="add keep_firing_for column to alert_rule" duration=9.912454ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.949060814Z level=info msg="Executing migration" id="add keep_firing_for column to alert_rule_version" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.958945548Z level=info msg="Migration successfully executed" id="add keep_firing_for column to alert_rule_version" duration=9.884064ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.962107978Z level=info msg="Executing migration" id="add missing_series_evals_to_resolve column to alert_rule" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.971432817Z level=info msg="Migration successfully executed" id="add missing_series_evals_to_resolve column to alert_rule" duration=9.332169ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.976673907Z level=info msg="Executing migration" id="add missing_series_evals_to_resolve column to alert_rule_version" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.986122457Z level=info msg="Migration successfully executed" id="add missing_series_evals_to_resolve column to alert_rule_version" duration=9.44741ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.991301907Z level=info msg="Executing migration" id="remove the datasources:drilldown action" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.991507289Z level=info msg="Removed 0 datasources:drilldown permissions" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.991520179Z level=info msg="Migration successfully executed" id="remove the datasources:drilldown action" duration=218.692µs 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.995457996Z level=info msg="Executing 
migration" id="remove title in folder unique index" 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:58.9979619Z level=info msg="Migration successfully executed" id="remove title in folder unique index" duration=2.506854ms 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:59.002874937Z level=info msg="migrations completed" performed=654 skipped=0 duration=5.500928894s 09:27:20 grafana | logger=migrator t=2025-06-19T09:20:59.003511213Z level=info msg="Unlocking database" 09:27:20 grafana | logger=sqlstore t=2025-06-19T09:20:59.018417275Z level=info msg="Created default admin" user=admin 09:27:20 grafana | logger=sqlstore t=2025-06-19T09:20:59.018687738Z level=info msg="Created default organization" 09:27:20 grafana | logger=secrets t=2025-06-19T09:20:59.025009648Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1 09:27:20 grafana | logger=plugin.angulardetectorsprovider.dynamic t=2025-06-19T09:20:59.122816682Z level=info msg="Restored cache from database" duration=589.546µs 09:27:20 grafana | logger=resource-migrator t=2025-06-19T09:20:59.131207372Z level=info msg="Locking database" 09:27:20 grafana | logger=resource-migrator t=2025-06-19T09:20:59.131222612Z level=info msg="Starting DB migrations" 09:27:20 grafana | logger=resource-migrator t=2025-06-19T09:20:59.139155367Z level=info msg="Executing migration" id="create resource_migration_log table" 09:27:20 grafana | logger=resource-migrator t=2025-06-19T09:20:59.140136637Z level=info msg="Migration successfully executed" id="create resource_migration_log table" duration=980.93µs 09:27:20 grafana | logger=resource-migrator t=2025-06-19T09:20:59.14360676Z level=info msg="Executing migration" id="Initialize resource tables" 09:27:20 grafana | logger=resource-migrator t=2025-06-19T09:20:59.1436211Z level=info msg="Migration successfully executed" id="Initialize resource tables" duration=14.73µs 09:27:20 grafana | logger=resource-migrator t=2025-06-19T09:20:59.147922721Z level=info msg="Executing migration" id="drop table resource" 09:27:20 grafana | logger=resource-migrator t=2025-06-19T09:20:59.148025702Z level=info msg="Migration successfully executed" id="drop table resource" duration=103.061µs 09:27:20 grafana | logger=resource-migrator t=2025-06-19T09:20:59.151267983Z level=info msg="Executing migration" id="create table resource" 09:27:20 grafana | logger=resource-migrator t=2025-06-19T09:20:59.152340963Z level=info msg="Migration successfully executed" id="create table resource" duration=1.07289ms 09:27:20 grafana | logger=resource-migrator t=2025-06-19T09:20:59.155426533Z level=info msg="Executing migration" id="create table resource, index: 0" 09:27:20 grafana | logger=resource-migrator t=2025-06-19T09:20:59.156646814Z level=info msg="Migration successfully executed" id="create table resource, index: 0" duration=1.220031ms 09:27:20 grafana | logger=resource-migrator t=2025-06-19T09:20:59.160863344Z level=info msg="Executing migration" id="drop table resource_history" 09:27:20 grafana | logger=resource-migrator t=2025-06-19T09:20:59.160944935Z level=info msg="Migration successfully executed" id="drop table resource_history" duration=82.201µs 09:27:20 grafana | logger=resource-migrator t=2025-06-19T09:20:59.16562245Z level=info msg="Executing migration" id="create table resource_history" 09:27:20 grafana | logger=resource-migrator t=2025-06-19T09:20:59.166957492Z level=info msg="Migration successfully executed" id="create table resource_history" duration=1.335272ms 09:27:20 grafana | 
logger=resource-migrator t=2025-06-19T09:20:59.171476176Z level=info msg="Executing migration" id="create table resource_history, index: 0" 09:27:20 grafana | logger=resource-migrator t=2025-06-19T09:20:59.172802558Z level=info msg="Migration successfully executed" id="create table resource_history, index: 0" duration=1.326222ms 09:27:20 grafana | logger=resource-migrator t=2025-06-19T09:20:59.177098949Z level=info msg="Executing migration" id="create table resource_history, index: 1" 09:27:20 grafana | logger=resource-migrator t=2025-06-19T09:20:59.178273231Z level=info msg="Migration successfully executed" id="create table resource_history, index: 1" duration=1.173762ms 09:27:20 grafana | logger=resource-migrator t=2025-06-19T09:20:59.181763954Z level=info msg="Executing migration" id="drop table resource_version" 09:27:20 grafana | logger=resource-migrator t=2025-06-19T09:20:59.181863095Z level=info msg="Migration successfully executed" id="drop table resource_version" duration=101.201µs 09:27:20 grafana | logger=resource-migrator t=2025-06-19T09:20:59.185316398Z level=info msg="Executing migration" id="create table resource_version" 09:27:20 grafana | logger=resource-migrator t=2025-06-19T09:20:59.186148556Z level=info msg="Migration successfully executed" id="create table resource_version" duration=831.918µs 09:27:20 grafana | logger=resource-migrator t=2025-06-19T09:20:59.19183151Z level=info msg="Executing migration" id="create table resource_version, index: 0" 09:27:20 grafana | logger=resource-migrator t=2025-06-19T09:20:59.193294784Z level=info msg="Migration successfully executed" id="create table resource_version, index: 0" duration=1.463204ms 09:27:20 grafana | logger=resource-migrator t=2025-06-19T09:20:59.197483104Z level=info msg="Executing migration" id="drop table resource_blob" 09:27:20 grafana | logger=resource-migrator t=2025-06-19T09:20:59.197566945Z level=info msg="Migration successfully executed" id="drop table resource_blob" duration=83.881µs 09:27:20 grafana | logger=resource-migrator t=2025-06-19T09:20:59.200567213Z level=info msg="Executing migration" id="create table resource_blob" 09:27:20 grafana | logger=resource-migrator t=2025-06-19T09:20:59.201748475Z level=info msg="Migration successfully executed" id="create table resource_blob" duration=1.180892ms 09:27:20 grafana | logger=resource-migrator t=2025-06-19T09:20:59.204795434Z level=info msg="Executing migration" id="create table resource_blob, index: 0" 09:27:20 grafana | logger=resource-migrator t=2025-06-19T09:20:59.206009035Z level=info msg="Migration successfully executed" id="create table resource_blob, index: 0" duration=1.212911ms 09:27:20 grafana | logger=resource-migrator t=2025-06-19T09:20:59.21171692Z level=info msg="Executing migration" id="create table resource_blob, index: 1" 09:27:20 grafana | logger=resource-migrator t=2025-06-19T09:20:59.212930991Z level=info msg="Migration successfully executed" id="create table resource_blob, index: 1" duration=1.214001ms 09:27:20 grafana | logger=resource-migrator t=2025-06-19T09:20:59.217151022Z level=info msg="Executing migration" id="Add column previous_resource_version in resource_history" 09:27:20 grafana | logger=resource-migrator t=2025-06-19T09:20:59.231455718Z level=info msg="Migration successfully executed" id="Add column previous_resource_version in resource_history" duration=14.303946ms 09:27:20 grafana | logger=resource-migrator t=2025-06-19T09:20:59.240367383Z level=info msg="Executing migration" id="Add column previous_resource_version 
in resource" 09:27:20 grafana | logger=resource-migrator t=2025-06-19T09:20:59.251320988Z level=info msg="Migration successfully executed" id="Add column previous_resource_version in resource" duration=10.953864ms 09:27:20 grafana | logger=resource-migrator t=2025-06-19T09:20:59.558953623Z level=info msg="Executing migration" id="Add index to resource_history for polling" 09:27:20 grafana | logger=resource-migrator t=2025-06-19T09:20:59.562071372Z level=info msg="Migration successfully executed" id="Add index to resource_history for polling" duration=3.12023ms 09:27:20 grafana | logger=resource-migrator t=2025-06-19T09:20:59.568039809Z level=info msg="Executing migration" id="Add index to resource for loading" 09:27:20 grafana | logger=resource-migrator t=2025-06-19T09:20:59.569436273Z level=info msg="Migration successfully executed" id="Add index to resource for loading" duration=1.396273ms 09:27:20 grafana | logger=resource-migrator t=2025-06-19T09:20:59.573946606Z level=info msg="Executing migration" id="Add column folder in resource_history" 09:27:20 grafana | logger=resource-migrator t=2025-06-19T09:20:59.58804177Z level=info msg="Migration successfully executed" id="Add column folder in resource_history" duration=14.095495ms 09:27:20 grafana | logger=resource-migrator t=2025-06-19T09:20:59.592049028Z level=info msg="Executing migration" id="Add column folder in resource" 09:27:20 grafana | logger=resource-migrator t=2025-06-19T09:20:59.601269796Z level=info msg="Migration successfully executed" id="Add column folder in resource" duration=9.219178ms 09:27:20 grafana | logger=resource-migrator t=2025-06-19T09:20:59.607170242Z level=info msg="Executing migration" id="Migrate DeletionMarkers to real Resource objects" 09:27:20 grafana | logger=deletion-marker-migrator t=2025-06-19T09:20:59.607203223Z level=info msg="finding any deletion markers" 09:27:20 grafana | logger=resource-migrator t=2025-06-19T09:20:59.607756748Z level=info msg="Migration successfully executed" id="Migrate DeletionMarkers to real Resource objects" duration=589.806µs 09:27:20 grafana | logger=resource-migrator t=2025-06-19T09:20:59.611191671Z level=info msg="Executing migration" id="Add index to resource_history for get trash" 09:27:20 grafana | logger=resource-migrator t=2025-06-19T09:20:59.612611534Z level=info msg="Migration successfully executed" id="Add index to resource_history for get trash" duration=1.419943ms 09:27:20 grafana | logger=resource-migrator t=2025-06-19T09:20:59.616719243Z level=info msg="Executing migration" id="Add generation to resource history" 09:27:20 grafana | logger=resource-migrator t=2025-06-19T09:20:59.62788523Z level=info msg="Migration successfully executed" id="Add generation to resource history" duration=11.165367ms 09:27:20 grafana | logger=resource-migrator t=2025-06-19T09:20:59.631328833Z level=info msg="Executing migration" id="Add generation index to resource history" 09:27:20 grafana | logger=resource-migrator t=2025-06-19T09:20:59.632260442Z level=info msg="Migration successfully executed" id="Add generation index to resource history" duration=931.269µs 09:27:20 grafana | logger=resource-migrator t=2025-06-19T09:21:22.021708224Z level=info msg="migrations completed" performed=26 skipped=0 duration=22.882588207s 09:27:20 grafana | logger=resource-migrator t=2025-06-19T09:21:22.02339574Z level=info msg="Unlocking database" 09:27:20 grafana | t=2025-06-19T09:21:22.023853054Z level=info caller=logger.go:214 time=2025-06-19T09:21:22.023827484Z msg="Using channel notifier" 
logger=sql-resource-server 09:27:20 grafana | logger=plugin.store t=2025-06-19T09:21:22.039885646Z level=info msg="Loading plugins..." 09:27:20 grafana | logger=plugins.registration t=2025-06-19T09:21:22.082856431Z level=error msg="Could not register plugin" pluginId=table error="plugin table is already registered" 09:27:20 grafana | logger=plugins.initialization t=2025-06-19T09:21:22.082922242Z level=error msg="Could not initialize plugin" pluginId=table error="plugin table is already registered" 09:27:20 grafana | logger=plugin.store t=2025-06-19T09:21:22.082966642Z level=info msg="Plugins loaded" count=53 duration=43.082756ms 09:27:20 grafana | logger=query_data t=2025-06-19T09:21:22.088112001Z level=info msg="Query Service initialization" 09:27:20 grafana | logger=live.push_http t=2025-06-19T09:21:22.092903036Z level=info msg="Live Push Gateway initialization" 09:27:20 grafana | logger=ngalert.notifier.alertmanager org=1 t=2025-06-19T09:21:22.119690799Z level=info msg="Applying new configuration to Alertmanager" configHash=d2c56faca6af2a5772ff4253222f7386 09:27:20 grafana | logger=ngalert t=2025-06-19T09:21:22.128508722Z level=info msg="Using simple database alert instance store" 09:27:20 grafana | logger=ngalert.state.manager.persist t=2025-06-19T09:21:22.128565663Z level=info msg="Using sync state persister" 09:27:20 grafana | logger=infra.usagestats.collector t=2025-06-19T09:21:22.133073445Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2 09:27:20 grafana | logger=ngalert.state.manager t=2025-06-19T09:21:22.135996773Z level=info msg="Warming state cache for startup" 09:27:20 grafana | logger=http.server t=2025-06-19T09:21:22.137271455Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=http subUrl= socket= 09:27:20 grafana | logger=ngalert.multiorg.alertmanager t=2025-06-19T09:21:22.140372204Z level=info msg="Starting MultiOrg Alertmanager" 09:27:20 grafana | logger=grafanaStorageLogger t=2025-06-19T09:21:22.140423314Z level=info msg="Storage starting" 09:27:20 grafana | logger=plugin.backgroundinstaller t=2025-06-19T09:21:22.140695817Z level=info msg="Installing plugin" pluginId=grafana-pyroscope-app version= 09:27:20 grafana | logger=ngalert.state.manager t=2025-06-19T09:21:22.257360248Z level=info msg="State cache has been initialized" states=0 duration=121.361565ms 09:27:20 grafana | logger=ngalert.scheduler t=2025-06-19T09:21:22.257428009Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=3 09:27:20 grafana | logger=ticker t=2025-06-19T09:21:22.257498649Z level=info msg=starting first_tick=2025-06-19T09:21:30Z 09:27:20 grafana | logger=provisioning.datasources t=2025-06-19T09:21:22.262267375Z level=info msg="inserting datasource from configuration" name=PolicyPrometheus uid=dkSf71fnz 09:27:20 grafana | logger=provisioning.alerting t=2025-06-19T09:21:22.304399082Z level=info msg="starting to provision alerting" 09:27:20 grafana | logger=provisioning.alerting t=2025-06-19T09:21:22.304438542Z level=info msg="finished to provision alerting" 09:27:20 grafana | logger=provisioning.dashboard t=2025-06-19T09:21:22.306490462Z level=info msg="starting to provision dashboards" 09:27:20 grafana | logger=grafana-apiserver t=2025-06-19T09:21:22.831700498Z level=info msg="Adding GroupVersion dashboard.grafana.app v1beta1 to ResourceManager" 09:27:20 grafana | logger=grafana-apiserver t=2025-06-19T09:21:22.832447555Z level=info msg="Adding GroupVersion dashboard.grafana.app v0alpha1 to ResourceManager" 09:27:20 grafana | 
logger=grafana-apiserver t=2025-06-19T09:21:22.833081051Z level=info msg="Adding GroupVersion dashboard.grafana.app v2alpha1 to ResourceManager" 09:27:20 grafana | logger=grafana-apiserver t=2025-06-19T09:21:22.833699727Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager" 09:27:20 grafana | logger=grafana-apiserver t=2025-06-19T09:21:22.834826207Z level=info msg="Adding GroupVersion folder.grafana.app v1beta1 to ResourceManager" 09:27:20 grafana | logger=grafana-apiserver t=2025-06-19T09:21:22.836466213Z level=info msg="Adding GroupVersion iam.grafana.app v0alpha1 to ResourceManager" 09:27:20 grafana | logger=grafana-apiserver t=2025-06-19T09:21:22.839071707Z level=info msg="Adding GroupVersion notifications.alerting.grafana.app v0alpha1 to ResourceManager" 09:27:20 grafana | logger=grafana-apiserver t=2025-06-19T09:21:22.840037676Z level=info msg="Adding GroupVersion userstorage.grafana.app v0alpha1 to ResourceManager" 09:27:20 grafana | logger=grafana-apiserver t=2025-06-19T09:21:22.840922565Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager" 09:27:20 grafana | logger=app-registry t=2025-06-19T09:21:22.898743141Z level=info msg="app registry initialized" 09:27:20 grafana | logger=plugins.update.checker t=2025-06-19T09:21:23.320823803Z level=info msg="Update check succeeded" duration=1.187282834s 09:27:20 grafana | logger=grafana.update.checker t=2025-06-19T09:21:23.323129604Z level=info msg="Update check succeeded" duration=1.187859748s 09:27:20 grafana | logger=plugin.angulardetectorsprovider.dynamic t=2025-06-19T09:21:23.380072091Z level=info msg="Patterns update finished" duration=1.213872173s 09:27:20 grafana | logger=provisioning.dashboard t=2025-06-19T09:21:23.449507596Z level=info msg="finished to provision dashboards" 09:27:20 grafana | logger=plugin.installer t=2025-06-19T09:21:23.939340717Z level=info msg="Installing plugin" pluginId=grafana-pyroscope-app version= 09:27:20 grafana | logger=installer.fs t=2025-06-19T09:21:23.99578614Z level=info msg="Downloaded and extracted grafana-pyroscope-app v1.4.1 zip successfully to /var/lib/grafana/plugins/grafana-pyroscope-app" 09:27:20 grafana | logger=plugins.registration t=2025-06-19T09:21:24.019814576Z level=info msg="Plugin registered" pluginId=grafana-pyroscope-app 09:27:20 grafana | logger=plugin.backgroundinstaller t=2025-06-19T09:21:24.019839347Z level=info msg="Plugin successfully installed" pluginId=grafana-pyroscope-app version= duration=1.87912126s 09:27:20 grafana | logger=plugin.backgroundinstaller t=2025-06-19T09:21:24.019862427Z level=info msg="Installing plugin" pluginId=grafana-exploretraces-app version= 09:27:20 grafana | logger=plugin.installer t=2025-06-19T09:21:24.238154095Z level=info msg="Installing plugin" pluginId=grafana-exploretraces-app version= 09:27:20 grafana | logger=installer.fs t=2025-06-19T09:21:24.296380734Z level=info msg="Downloaded and extracted grafana-exploretraces-app v1.0.0 zip successfully to /var/lib/grafana/plugins/grafana-exploretraces-app" 09:27:20 grafana | logger=plugins.registration t=2025-06-19T09:21:24.313200423Z level=info msg="Plugin registered" pluginId=grafana-exploretraces-app 09:27:20 grafana | logger=plugin.backgroundinstaller t=2025-06-19T09:21:24.313223033Z level=info msg="Plugin successfully installed" pluginId=grafana-exploretraces-app version= duration=293.355116ms 09:27:20 grafana | logger=plugin.backgroundinstaller t=2025-06-19T09:21:24.313250653Z level=info msg="Installing plugin" 
pluginId=grafana-metricsdrilldown-app version= 09:27:20 grafana | logger=plugin.installer t=2025-06-19T09:21:24.604708072Z level=info msg="Installing plugin" pluginId=grafana-metricsdrilldown-app version= 09:27:20 grafana | logger=installer.fs t=2025-06-19T09:21:24.66072257Z level=info msg="Downloaded and extracted grafana-metricsdrilldown-app v1.0.3 zip successfully to /var/lib/grafana/plugins/grafana-metricsdrilldown-app" 09:27:20 grafana | logger=plugins.registration t=2025-06-19T09:21:24.679087833Z level=info msg="Plugin registered" pluginId=grafana-metricsdrilldown-app 09:27:20 grafana | logger=plugin.backgroundinstaller t=2025-06-19T09:21:24.679115023Z level=info msg="Plugin successfully installed" pluginId=grafana-metricsdrilldown-app version= duration=365.8596ms 09:27:20 grafana | logger=plugin.backgroundinstaller t=2025-06-19T09:21:24.679137943Z level=info msg="Installing plugin" pluginId=grafana-lokiexplore-app version= 09:27:20 grafana | logger=plugin.installer t=2025-06-19T09:21:25.086444214Z level=info msg="Installing plugin" pluginId=grafana-lokiexplore-app version= 09:27:20 grafana | logger=installer.fs t=2025-06-19T09:21:25.221209974Z level=info msg="Downloaded and extracted grafana-lokiexplore-app v1.0.18 zip successfully to /var/lib/grafana/plugins/grafana-lokiexplore-app" 09:27:20 grafana | logger=plugins.registration t=2025-06-19T09:21:25.246702404Z level=info msg="Plugin registered" pluginId=grafana-lokiexplore-app 09:27:20 grafana | logger=plugin.backgroundinstaller t=2025-06-19T09:21:25.246753624Z level=info msg="Plugin successfully installed" pluginId=grafana-lokiexplore-app version= duration=567.611361ms 09:27:20 grafana | logger=infra.usagestats t=2025-06-19T09:22:23.142866965Z level=info msg="Usage stats are ready to report" 09:27:20 kafka | ===> User 09:27:20 kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) 09:27:20 kafka | ===> Configuring ... 09:27:20 kafka | Running in Zookeeper mode... 09:27:20 kafka | ===> Running preflight checks ... 09:27:20 kafka | ===> Check if /var/lib/kafka/data is writable ... 09:27:20 kafka | ===> Check if Zookeeper is healthy ... 09:27:20 kafka | [2025-06-19 09:20:55,651] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper) 09:27:20 kafka | [2025-06-19 09:20:55,652] INFO Client environment:host.name=kafka (org.apache.zookeeper.ZooKeeper) 09:27:20 kafka | [2025-06-19 09:20:55,652] INFO Client environment:java.version=11.0.26 (org.apache.zookeeper.ZooKeeper) 09:27:20 kafka | [2025-06-19 09:20:55,652] INFO Client environment:java.vendor=Azul Systems, Inc. 
(org.apache.zookeeper.ZooKeeper) 09:27:20 kafka | [2025-06-19 09:20:55,652] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) 09:27:20 kafka | [2025-06-19 09:20:55,652] INFO Client environment:java.class.path=/usr/share/java/cp-base-new/kafka-storage-api-7.4.9-ccs.jar:/usr/share/java/cp-base-new/scala-logging_2.13-3.9.4.jar:/usr/share/java/cp-base-new/jackson-datatype-jdk8-2.14.2.jar:/usr/share/java/cp-base-new/logredactor-1.0.12.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/kafka-server-common-7.4.9-ccs.jar:/usr/share/java/cp-base-new/scala-library-2.13.10.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/jackson-annotations-2.14.2.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/kafka-clients-7.4.9-ccs.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.14.2.jar:/usr/share/java/cp-base-new/kafka-storage-7.4.9-ccs.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/kafka-raft-7.4.9-ccs.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.8.4.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.2-1.jar:/usr/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-7.4.9-ccs.jar:/usr/share/java/cp-base-new/common-utils-7.4.9.jar:/usr/share/java/cp-base-new/jackson-dataformat-yaml-2.14.2.jar:/usr/share/java/cp-base-new/kafka-metadata-7.4.9-ccs.jar:/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/share/java/cp-base-new/jmx_prometheus_javaagent-0.18.0.jar:/usr/share/java/cp-base-new/reload4j-1.2.25.jar:/usr/share/java/cp-base-new/jackson-core-2.14.2.jar:/usr/share/java/cp-base-new/commons-io-2.16.0.jar:/usr/share/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/audience-annotations-0.12.0.jar:/usr/share/java/cp-base-new/gson-2.9.0.jar:/usr/share/java/cp-base-new/snakeyaml-2.0.jar:/usr/share/java/cp-base-new/zookeeper-3.8.4.jar:/usr/share/java/cp-base-new/kafka_2.13-7.4.9-ccs.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.12.jar:/usr/share/java/cp-base-new/utility-belt-7.4.9-53.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.10.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.14.2.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/jackson-databind-2.14.2.jar:/usr/share/java/cp-base-new/snappy-java-1.1.10.5.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.4.9.jar:/usr/share/java/cp-base-new/jose4j-0.9.5.jar (org.apache.zookeeper.ZooKeeper) 09:27:20 kafka | [2025-06-19 09:20:55,652] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) 09:27:20 kafka | [2025-06-19 09:20:55,652] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) 09:27:20 kafka | [2025-06-19 09:20:55,653] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) 09:27:20 kafka | [2025-06-19 09:20:55,653] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) 09:27:20 kafka | [2025-06-19 
09:20:55,653] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) 09:27:20 kafka | [2025-06-19 09:20:55,653] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) 09:27:20 kafka | [2025-06-19 09:20:55,653] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) 09:27:20 kafka | [2025-06-19 09:20:55,653] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) 09:27:20 kafka | [2025-06-19 09:20:55,653] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) 09:27:20 kafka | [2025-06-19 09:20:55,654] INFO Client environment:os.memory.free=493MB (org.apache.zookeeper.ZooKeeper) 09:27:20 kafka | [2025-06-19 09:20:55,654] INFO Client environment:os.memory.max=8042MB (org.apache.zookeeper.ZooKeeper) 09:27:20 kafka | [2025-06-19 09:20:55,654] INFO Client environment:os.memory.total=504MB (org.apache.zookeeper.ZooKeeper) 09:27:20 kafka | [2025-06-19 09:20:55,657] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@19dc67c2 (org.apache.zookeeper.ZooKeeper) 09:27:20 kafka | [2025-06-19 09:20:55,660] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) 09:27:20 kafka | [2025-06-19 09:20:55,664] INFO jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket) 09:27:20 kafka | [2025-06-19 09:20:55,671] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) 09:27:20 kafka | [2025-06-19 09:20:55,703] INFO Opening socket connection to server zookeeper/172.17.0.4:2181. (org.apache.zookeeper.ClientCnxn) 09:27:20 kafka | [2025-06-19 09:20:55,704] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn) 09:27:20 kafka | [2025-06-19 09:20:55,712] INFO Socket connection established, initiating session, client: /172.17.0.5:59772, server: zookeeper/172.17.0.4:2181 (org.apache.zookeeper.ClientCnxn) 09:27:20 kafka | [2025-06-19 09:20:55,770] INFO Session establishment complete on server zookeeper/172.17.0.4:2181, session id = 0x10000022abf0000, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn) 09:27:20 kafka | [2025-06-19 09:20:55,896] INFO Session: 0x10000022abf0000 closed (org.apache.zookeeper.ZooKeeper) 09:27:20 kafka | [2025-06-19 09:20:55,897] INFO EventThread shut down for session: 0x10000022abf0000 (org.apache.zookeeper.ClientCnxn) 09:27:20 kafka | Using log4j config /etc/kafka/log4j.properties 09:27:20 kafka | ===> Launching ... 09:27:20 kafka | ===> Launching kafka ... 09:27:20 kafka | [2025-06-19 09:20:56,556] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$) 09:27:20 kafka | [2025-06-19 09:20:56,880] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) 09:27:20 kafka | [2025-06-19 09:20:56,969] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler) 09:27:20 kafka | [2025-06-19 09:20:56,970] INFO starting (kafka.server.KafkaServer) 09:27:20 kafka | [2025-06-19 09:20:56,970] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer) 09:27:20 kafka | [2025-06-19 09:20:56,983] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181. 
(kafka.zookeeper.ZooKeeperClient) 09:27:20 kafka | [2025-06-19 09:20:56,987] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper) 09:27:20 kafka | [2025-06-19 09:20:56,987] INFO Client environment:host.name=kafka (org.apache.zookeeper.ZooKeeper) 09:27:20 kafka | [2025-06-19 09:20:56,987] INFO Client environment:java.version=11.0.26 (org.apache.zookeeper.ZooKeeper) 09:27:20 kafka | [2025-06-19 09:20:56,987] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper) 09:27:20 kafka | [2025-06-19 09:20:56,987] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) 09:27:20 kafka | [2025-06-19 09:20:56,987] INFO Client environment:java.class.path=/usr/bin/../share/java/kafka/kafka-storage-api-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/netty-common-4.1.115.Final.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/connect-json-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/scala-library-2.13.10.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/kafka-shell-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/reflections-0.9.12.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/kafka-clients-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/kafka-storage-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/kafka-raft-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/kafka-streams-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.0.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.2-1.jar:/usr/bin/../share/java/kafka/connect-runtime-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.115.Final.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/connect-api-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.115.Final.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.115.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.1.2.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.115.Final.jar:/usr/bin/../share/java/kafka/c
onnect-basic-auth-extension-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.115.Final.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/trogdor-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/commons-io-2.16.0.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/connect-transforms-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/connect-mirror-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/kafka-tools-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.10.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.115.Final.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.4.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.115.Final.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.115.Final.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.57.v20241219.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.ZooKeeper) 09:27:20 kafka | [2025-06-19 09:20:56,987] INFO Client 
environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) 09:27:20 kafka | [2025-06-19 09:20:56,987] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) 09:27:20 kafka | [2025-06-19 09:20:56,987] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) 09:27:20 kafka | [2025-06-19 09:20:56,988] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) 09:27:20 kafka | [2025-06-19 09:20:56,988] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) 09:27:20 kafka | [2025-06-19 09:20:56,988] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) 09:27:20 kafka | [2025-06-19 09:20:56,988] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) 09:27:20 kafka | [2025-06-19 09:20:56,988] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) 09:27:20 kafka | [2025-06-19 09:20:56,988] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) 09:27:20 kafka | [2025-06-19 09:20:56,988] INFO Client environment:os.memory.free=1009MB (org.apache.zookeeper.ZooKeeper) 09:27:20 kafka | [2025-06-19 09:20:56,988] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper) 09:27:20 kafka | [2025-06-19 09:20:56,988] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper) 09:27:20 kafka | [2025-06-19 09:20:56,990] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@52851b44 (org.apache.zookeeper.ZooKeeper) 09:27:20 kafka | [2025-06-19 09:20:56,993] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket) 09:27:20 kafka | [2025-06-19 09:20:56,999] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) 09:27:20 kafka | [2025-06-19 09:20:57,001] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient) 09:27:20 kafka | [2025-06-19 09:20:57,006] INFO Opening socket connection to server zookeeper/172.17.0.4:2181. (org.apache.zookeeper.ClientCnxn) 09:27:20 kafka | [2025-06-19 09:20:57,012] INFO Socket connection established, initiating session, client: /172.17.0.5:59774, server: zookeeper/172.17.0.4:2181 (org.apache.zookeeper.ClientCnxn) 09:27:20 kafka | [2025-06-19 09:20:57,020] INFO Session establishment complete on server zookeeper/172.17.0.4:2181, session id = 0x10000022abf0001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn) 09:27:20 kafka | [2025-06-19 09:20:57,023] INFO [ZooKeeperClient Kafka server] Connected. 
(kafka.zookeeper.ZooKeeperClient) 09:27:20 kafka | [2025-06-19 09:20:57,318] INFO Cluster ID = qtmK5DWmQ46tn_mNqWtZzg (kafka.server.KafkaServer) 09:27:20 kafka | [2025-06-19 09:20:57,324] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint) 09:27:20 kafka | [2025-06-19 09:20:57,377] INFO KafkaConfig values: 09:27:20 kafka | advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092 09:27:20 kafka | alter.config.policy.class.name = null 09:27:20 kafka | alter.log.dirs.replication.quota.window.num = 11 09:27:20 kafka | alter.log.dirs.replication.quota.window.size.seconds = 1 09:27:20 kafka | authorizer.class.name = 09:27:20 kafka | auto.create.topics.enable = true 09:27:20 kafka | auto.include.jmx.reporter = true 09:27:20 kafka | auto.leader.rebalance.enable = true 09:27:20 kafka | background.threads = 10 09:27:20 kafka | broker.heartbeat.interval.ms = 2000 09:27:20 kafka | broker.id = 1 09:27:20 kafka | broker.id.generation.enable = true 09:27:20 kafka | broker.rack = null 09:27:20 kafka | broker.session.timeout.ms = 9000 09:27:20 kafka | client.quota.callback.class = null 09:27:20 kafka | compression.type = producer 09:27:20 kafka | connection.failed.authentication.delay.ms = 100 09:27:20 kafka | connections.max.idle.ms = 600000 09:27:20 kafka | connections.max.reauth.ms = 0 09:27:20 kafka | control.plane.listener.name = null 09:27:20 kafka | controlled.shutdown.enable = true 09:27:20 kafka | controlled.shutdown.max.retries = 3 09:27:20 kafka | controlled.shutdown.retry.backoff.ms = 5000 09:27:20 kafka | controller.listener.names = null 09:27:20 kafka | controller.quorum.append.linger.ms = 25 09:27:20 kafka | controller.quorum.election.backoff.max.ms = 1000 09:27:20 kafka | controller.quorum.election.timeout.ms = 1000 09:27:20 kafka | controller.quorum.fetch.timeout.ms = 2000 09:27:20 kafka | controller.quorum.request.timeout.ms = 2000 09:27:20 kafka | controller.quorum.retry.backoff.ms = 20 09:27:20 kafka | controller.quorum.voters = [] 09:27:20 kafka | controller.quota.window.num = 11 09:27:20 kafka | controller.quota.window.size.seconds = 1 09:27:20 kafka | controller.socket.timeout.ms = 30000 09:27:20 kafka | create.topic.policy.class.name = null 09:27:20 kafka | default.replication.factor = 1 09:27:20 kafka | delegation.token.expiry.check.interval.ms = 3600000 09:27:20 kafka | delegation.token.expiry.time.ms = 86400000 09:27:20 kafka | delegation.token.master.key = null 09:27:20 kafka | delegation.token.max.lifetime.ms = 604800000 09:27:20 kafka | delegation.token.secret.key = null 09:27:20 kafka | delete.records.purgatory.purge.interval.requests = 1 09:27:20 kafka | delete.topic.enable = true 09:27:20 kafka | early.start.listeners = null 09:27:20 kafka | fetch.max.bytes = 57671680 09:27:20 kafka | fetch.purgatory.purge.interval.requests = 1000 09:27:20 kafka | group.initial.rebalance.delay.ms = 3000 09:27:20 kafka | group.max.session.timeout.ms = 1800000 09:27:20 kafka | group.max.size = 2147483647 09:27:20 kafka | group.min.session.timeout.ms = 6000 09:27:20 kafka | initial.broker.registration.timeout.ms = 60000 09:27:20 kafka | inter.broker.listener.name = PLAINTEXT 09:27:20 kafka | inter.broker.protocol.version = 3.4-IV0 09:27:20 kafka | kafka.metrics.polling.interval.secs = 10 09:27:20 kafka | kafka.metrics.reporters = [] 09:27:20 kafka | leader.imbalance.check.interval.seconds = 300 09:27:20 kafka | leader.imbalance.per.broker.percentage = 10 09:27:20 kafka | listener.security.protocol.map = 
PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT 09:27:20 kafka | listeners = PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092 09:27:20 kafka | log.cleaner.backoff.ms = 15000 09:27:20 kafka | log.cleaner.dedupe.buffer.size = 134217728 09:27:20 kafka | log.cleaner.delete.retention.ms = 86400000 09:27:20 kafka | log.cleaner.enable = true 09:27:20 kafka | log.cleaner.io.buffer.load.factor = 0.9 09:27:20 kafka | log.cleaner.io.buffer.size = 524288 09:27:20 kafka | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 09:27:20 kafka | log.cleaner.max.compaction.lag.ms = 9223372036854775807 09:27:20 kafka | log.cleaner.min.cleanable.ratio = 0.5 09:27:20 kafka | log.cleaner.min.compaction.lag.ms = 0 09:27:20 kafka | log.cleaner.threads = 1 09:27:20 kafka | log.cleanup.policy = [delete] 09:27:20 kafka | log.dir = /tmp/kafka-logs 09:27:20 kafka | log.dirs = /var/lib/kafka/data 09:27:20 kafka | log.flush.interval.messages = 9223372036854775807 09:27:20 kafka | log.flush.interval.ms = null 09:27:20 kafka | log.flush.offset.checkpoint.interval.ms = 60000 09:27:20 kafka | log.flush.scheduler.interval.ms = 9223372036854775807 09:27:20 kafka | log.flush.start.offset.checkpoint.interval.ms = 60000 09:27:20 kafka | log.index.interval.bytes = 4096 09:27:20 kafka | log.index.size.max.bytes = 10485760 09:27:20 kafka | log.message.downconversion.enable = true 09:27:20 kafka | log.message.format.version = 3.0-IV1 09:27:20 kafka | log.message.timestamp.difference.max.ms = 9223372036854775807 09:27:20 kafka | log.message.timestamp.type = CreateTime 09:27:20 kafka | log.preallocate = false 09:27:20 kafka | log.retention.bytes = -1 09:27:20 kafka | log.retention.check.interval.ms = 300000 09:27:20 kafka | log.retention.hours = 168 09:27:20 kafka | log.retention.minutes = null 09:27:20 kafka | log.retention.ms = null 09:27:20 kafka | log.roll.hours = 168 09:27:20 kafka | log.roll.jitter.hours = 0 09:27:20 kafka | log.roll.jitter.ms = null 09:27:20 kafka | log.roll.ms = null 09:27:20 kafka | log.segment.bytes = 1073741824 09:27:20 kafka | log.segment.delete.delay.ms = 60000 09:27:20 kafka | max.connection.creation.rate = 2147483647 09:27:20 kafka | max.connections = 2147483647 09:27:20 kafka | max.connections.per.ip = 2147483647 09:27:20 kafka | max.connections.per.ip.overrides = 09:27:20 kafka | max.incremental.fetch.session.cache.slots = 1000 09:27:20 kafka | message.max.bytes = 1048588 09:27:20 kafka | metadata.log.dir = null 09:27:20 kafka | metadata.log.max.record.bytes.between.snapshots = 20971520 09:27:20 kafka | metadata.log.max.snapshot.interval.ms = 3600000 09:27:20 kafka | metadata.log.segment.bytes = 1073741824 09:27:20 kafka | metadata.log.segment.min.bytes = 8388608 09:27:20 kafka | metadata.log.segment.ms = 604800000 09:27:20 kafka | metadata.max.idle.interval.ms = 500 09:27:20 kafka | metadata.max.retention.bytes = 104857600 09:27:20 kafka | metadata.max.retention.ms = 604800000 09:27:20 kafka | metric.reporters = [] 09:27:20 kafka | metrics.num.samples = 2 09:27:20 kafka | metrics.recording.level = INFO 09:27:20 kafka | metrics.sample.window.ms = 30000 09:27:20 kafka | min.insync.replicas = 1 09:27:20 kafka | node.id = 1 09:27:20 kafka | num.io.threads = 8 09:27:20 kafka | num.network.threads = 3 09:27:20 kafka | num.partitions = 1 09:27:20 kafka | num.recovery.threads.per.data.dir = 1 09:27:20 kafka | num.replica.alter.log.dirs.threads = null 09:27:20 kafka | num.replica.fetchers = 1 09:27:20 kafka | offset.metadata.max.bytes = 4096 09:27:20 kafka | offsets.commit.required.acks = -1 
09:27:20 kafka | offsets.commit.timeout.ms = 5000 09:27:20 kafka | offsets.load.buffer.size = 5242880 09:27:20 kafka | offsets.retention.check.interval.ms = 600000 09:27:20 kafka | offsets.retention.minutes = 10080 09:27:20 kafka | offsets.topic.compression.codec = 0 09:27:20 kafka | offsets.topic.num.partitions = 50 09:27:20 kafka | offsets.topic.replication.factor = 1 09:27:20 kafka | offsets.topic.segment.bytes = 104857600 09:27:20 kafka | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding 09:27:20 kafka | password.encoder.iterations = 4096 09:27:20 kafka | password.encoder.key.length = 128 09:27:20 kafka | password.encoder.keyfactory.algorithm = null 09:27:20 kafka | password.encoder.old.secret = null 09:27:20 kafka | password.encoder.secret = null 09:27:20 kafka | principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder 09:27:20 kafka | process.roles = [] 09:27:20 kafka | producer.id.expiration.check.interval.ms = 600000 09:27:20 kafka | producer.id.expiration.ms = 86400000 09:27:20 kafka | producer.purgatory.purge.interval.requests = 1000 09:27:20 kafka | queued.max.request.bytes = -1 09:27:20 kafka | queued.max.requests = 500 09:27:20 kafka | quota.window.num = 11 09:27:20 kafka | quota.window.size.seconds = 1 09:27:20 kafka | remote.log.index.file.cache.total.size.bytes = 1073741824 09:27:20 kafka | remote.log.manager.task.interval.ms = 30000 09:27:20 kafka | remote.log.manager.task.retry.backoff.max.ms = 30000 09:27:20 kafka | remote.log.manager.task.retry.backoff.ms = 500 09:27:20 kafka | remote.log.manager.task.retry.jitter = 0.2 09:27:20 kafka | remote.log.manager.thread.pool.size = 10 09:27:20 kafka | remote.log.metadata.manager.class.name = null 09:27:20 kafka | remote.log.metadata.manager.class.path = null 09:27:20 kafka | remote.log.metadata.manager.impl.prefix = null 09:27:20 kafka | remote.log.metadata.manager.listener.name = null 09:27:20 kafka | remote.log.reader.max.pending.tasks = 100 09:27:20 kafka | remote.log.reader.threads = 10 09:27:20 kafka | remote.log.storage.manager.class.name = null 09:27:20 kafka | remote.log.storage.manager.class.path = null 09:27:20 kafka | remote.log.storage.manager.impl.prefix = null 09:27:20 kafka | remote.log.storage.system.enable = false 09:27:20 kafka | replica.fetch.backoff.ms = 1000 09:27:20 kafka | replica.fetch.max.bytes = 1048576 09:27:20 kafka | replica.fetch.min.bytes = 1 09:27:20 kafka | replica.fetch.response.max.bytes = 10485760 09:27:20 kafka | replica.fetch.wait.max.ms = 500 09:27:20 kafka | replica.high.watermark.checkpoint.interval.ms = 5000 09:27:20 kafka | replica.lag.time.max.ms = 30000 09:27:20 kafka | replica.selector.class = null 09:27:20 kafka | replica.socket.receive.buffer.bytes = 65536 09:27:20 kafka | replica.socket.timeout.ms = 30000 09:27:20 kafka | replication.quota.window.num = 11 09:27:20 kafka | replication.quota.window.size.seconds = 1 09:27:20 kafka | request.timeout.ms = 30000 09:27:20 kafka | reserved.broker.max.id = 1000 09:27:20 kafka | sasl.client.callback.handler.class = null 09:27:20 kafka | sasl.enabled.mechanisms = [GSSAPI] 09:27:20 kafka | sasl.jaas.config = null 09:27:20 kafka | sasl.kerberos.kinit.cmd = /usr/bin/kinit 09:27:20 kafka | sasl.kerberos.min.time.before.relogin = 60000 09:27:20 kafka | sasl.kerberos.principal.to.local.rules = [DEFAULT] 09:27:20 kafka | sasl.kerberos.service.name = null 09:27:20 kafka | sasl.kerberos.ticket.renew.jitter = 0.05 09:27:20 kafka | sasl.kerberos.ticket.renew.window.factor = 0.8 09:27:20 kafka | 
sasl.login.callback.handler.class = null 09:27:20 kafka | sasl.login.class = null 09:27:20 kafka | sasl.login.connect.timeout.ms = null 09:27:20 kafka | sasl.login.read.timeout.ms = null 09:27:20 kafka | sasl.login.refresh.buffer.seconds = 300 09:27:20 kafka | sasl.login.refresh.min.period.seconds = 60 09:27:20 kafka | sasl.login.refresh.window.factor = 0.8 09:27:20 kafka | sasl.login.refresh.window.jitter = 0.05 09:27:20 kafka | sasl.login.retry.backoff.max.ms = 10000 09:27:20 kafka | sasl.login.retry.backoff.ms = 100 09:27:20 kafka | sasl.mechanism.controller.protocol = GSSAPI 09:27:20 kafka | sasl.mechanism.inter.broker.protocol = GSSAPI 09:27:20 kafka | sasl.oauthbearer.clock.skew.seconds = 30 09:27:20 kafka | sasl.oauthbearer.expected.audience = null 09:27:20 kafka | sasl.oauthbearer.expected.issuer = null 09:27:20 kafka | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 09:27:20 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 09:27:20 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 09:27:20 kafka | sasl.oauthbearer.jwks.endpoint.url = null 09:27:20 kafka | sasl.oauthbearer.scope.claim.name = scope 09:27:20 kafka | sasl.oauthbearer.sub.claim.name = sub 09:27:20 kafka | sasl.oauthbearer.token.endpoint.url = null 09:27:20 kafka | sasl.server.callback.handler.class = null 09:27:20 kafka | sasl.server.max.receive.size = 524288 09:27:20 kafka | security.inter.broker.protocol = PLAINTEXT 09:27:20 kafka | security.providers = null 09:27:20 kafka | socket.connection.setup.timeout.max.ms = 30000 09:27:20 kafka | socket.connection.setup.timeout.ms = 10000 09:27:20 kafka | socket.listen.backlog.size = 50 09:27:20 kafka | socket.receive.buffer.bytes = 102400 09:27:20 kafka | socket.request.max.bytes = 104857600 09:27:20 kafka | socket.send.buffer.bytes = 102400 09:27:20 kafka | ssl.cipher.suites = [] 09:27:20 kafka | ssl.client.auth = none 09:27:20 kafka | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 09:27:20 kafka | ssl.endpoint.identification.algorithm = https 09:27:20 kafka | ssl.engine.factory.class = null 09:27:20 kafka | ssl.key.password = null 09:27:20 kafka | ssl.keymanager.algorithm = SunX509 09:27:20 kafka | ssl.keystore.certificate.chain = null 09:27:20 kafka | ssl.keystore.key = null 09:27:20 kafka | ssl.keystore.location = null 09:27:20 kafka | ssl.keystore.password = null 09:27:20 kafka | ssl.keystore.type = JKS 09:27:20 kafka | ssl.principal.mapping.rules = DEFAULT 09:27:20 kafka | ssl.protocol = TLSv1.3 09:27:20 kafka | ssl.provider = null 09:27:20 kafka | ssl.secure.random.implementation = null 09:27:20 kafka | ssl.trustmanager.algorithm = PKIX 09:27:20 kafka | ssl.truststore.certificates = null 09:27:20 kafka | ssl.truststore.location = null 09:27:20 kafka | ssl.truststore.password = null 09:27:20 kafka | ssl.truststore.type = JKS 09:27:20 kafka | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000 09:27:20 kafka | transaction.max.timeout.ms = 900000 09:27:20 kafka | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000 09:27:20 kafka | transaction.state.log.load.buffer.size = 5242880 09:27:20 kafka | transaction.state.log.min.isr = 2 09:27:20 kafka | transaction.state.log.num.partitions = 50 09:27:20 kafka | transaction.state.log.replication.factor = 3 09:27:20 kafka | transaction.state.log.segment.bytes = 104857600 09:27:20 kafka | transactional.id.expiration.ms = 604800000 09:27:20 kafka | unclean.leader.election.enable = false 09:27:20 kafka | zookeeper.clientCnxnSocket = null 09:27:20 kafka | 
zookeeper.connect = zookeeper:2181 09:27:20 kafka | zookeeper.connection.timeout.ms = null 09:27:20 kafka | zookeeper.max.in.flight.requests = 10 09:27:20 kafka | zookeeper.metadata.migration.enable = false 09:27:20 kafka | zookeeper.session.timeout.ms = 18000 09:27:20 kafka | zookeeper.set.acl = false 09:27:20 kafka | zookeeper.ssl.cipher.suites = null 09:27:20 kafka | zookeeper.ssl.client.enable = false 09:27:20 kafka | zookeeper.ssl.crl.enable = false 09:27:20 kafka | zookeeper.ssl.enabled.protocols = null 09:27:20 kafka | zookeeper.ssl.endpoint.identification.algorithm = HTTPS 09:27:20 kafka | zookeeper.ssl.keystore.location = null 09:27:20 kafka | zookeeper.ssl.keystore.password = null 09:27:20 kafka | zookeeper.ssl.keystore.type = null 09:27:20 kafka | zookeeper.ssl.ocsp.enable = false 09:27:20 kafka | zookeeper.ssl.protocol = TLSv1.2 09:27:20 kafka | zookeeper.ssl.truststore.location = null 09:27:20 kafka | zookeeper.ssl.truststore.password = null 09:27:20 kafka | zookeeper.ssl.truststore.type = null 09:27:20 kafka | (kafka.server.KafkaConfig) 09:27:20 kafka | [2025-06-19 09:20:57,416] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 09:27:20 kafka | [2025-06-19 09:20:57,416] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 09:27:20 kafka | [2025-06-19 09:20:57,422] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 09:27:20 kafka | [2025-06-19 09:20:57,426] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 09:27:20 kafka | [2025-06-19 09:20:57,470] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager) 09:27:20 kafka | [2025-06-19 09:20:57,474] INFO Attempting recovery for all logs in /var/lib/kafka/data since no clean shutdown file was found (kafka.log.LogManager) 09:27:20 kafka | [2025-06-19 09:20:57,486] INFO Loaded 0 logs in 16ms. (kafka.log.LogManager) 09:27:20 kafka | [2025-06-19 09:20:57,487] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager) 09:27:20 kafka | [2025-06-19 09:20:57,489] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager) 09:27:20 kafka | [2025-06-19 09:20:57,507] INFO Starting the log cleaner (kafka.log.LogCleaner) 09:27:20 kafka | [2025-06-19 09:20:57,551] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner) 09:27:20 kafka | [2025-06-19 09:20:57,569] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread) 09:27:20 kafka | [2025-06-19 09:20:57,589] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener) 09:27:20 kafka | [2025-06-19 09:20:57,637] INFO [BrokerToControllerChannelManager broker=1 name=forwarding]: Starting (kafka.server.BrokerToControllerRequestThread) 09:27:20 kafka | [2025-06-19 09:20:57,984] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) 09:27:20 kafka | [2025-06-19 09:20:57,987] INFO Awaiting socket connections on 0.0.0.0:9092. 
(kafka.network.DataPlaneAcceptor) 09:27:20 kafka | [2025-06-19 09:20:58,015] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer) 09:27:20 kafka | [2025-06-19 09:20:58,015] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) 09:27:20 kafka | [2025-06-19 09:20:58,016] INFO Awaiting socket connections on 0.0.0.0:29092. (kafka.network.DataPlaneAcceptor) 09:27:20 kafka | [2025-06-19 09:20:58,019] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT_HOST) (kafka.network.SocketServer) 09:27:20 kafka | [2025-06-19 09:20:58,024] INFO [BrokerToControllerChannelManager broker=1 name=alterPartition]: Starting (kafka.server.BrokerToControllerRequestThread) 09:27:20 kafka | [2025-06-19 09:20:58,046] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 09:27:20 kafka | [2025-06-19 09:20:58,047] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 09:27:20 kafka | [2025-06-19 09:20:58,054] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 09:27:20 kafka | [2025-06-19 09:20:58,054] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 09:27:20 kafka | [2025-06-19 09:20:58,078] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler) 09:27:20 kafka | [2025-06-19 09:20:58,101] INFO Creating /brokers/ids/1 (is it secure? false) (kafka.zk.KafkaZkClient) 09:27:20 kafka | [2025-06-19 09:20:58,130] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1750324858117,1750324858117,1,0,0,72057603345022977,258,0,27 09:27:20 kafka | (kafka.zk.KafkaZkClient) 09:27:20 kafka | [2025-06-19 09:20:58,132] INFO Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, czxid (broker epoch): 27 (kafka.zk.KafkaZkClient) 09:27:20 kafka | [2025-06-19 09:20:58,190] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread) 09:27:20 kafka | [2025-06-19 09:20:58,200] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 09:27:20 kafka | [2025-06-19 09:20:58,204] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 09:27:20 kafka | [2025-06-19 09:20:58,211] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 09:27:20 kafka | [2025-06-19 09:20:58,222] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient) 09:27:20 kafka | [2025-06-19 09:20:58,228] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator) 09:27:20 kafka | [2025-06-19 09:20:58,237] INFO [Controller id=1] 1 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController) 09:27:20 kafka | [2025-06-19 09:20:58,241] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController) 09:27:20 kafka | [2025-06-19 09:20:58,244] INFO [GroupCoordinator 1]: Startup complete. 
(kafka.coordinator.group.GroupCoordinator) 09:27:20 kafka | [2025-06-19 09:20:58,247] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener) 09:27:20 kafka | [2025-06-19 09:20:58,272] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator) 09:27:20 kafka | [2025-06-19 09:20:58,277] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator) 09:27:20 kafka | [2025-06-19 09:20:58,282] INFO [Transaction Marker Channel Manager 1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager) 09:27:20 kafka | [2025-06-19 09:20:58,285] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController) 09:27:20 kafka | [2025-06-19 09:20:58,286] INFO [MetadataCache brokerId=1] Updated cache from existing to latest FinalizedFeaturesAndEpoch(features=Map(), epoch=0). (kafka.server.metadata.ZkMetadataCache) 09:27:20 kafka | [2025-06-19 09:20:58,295] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController) 09:27:20 kafka | [2025-06-19 09:20:58,308] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController) 09:27:20 kafka | [2025-06-19 09:20:58,313] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController) 09:27:20 kafka | [2025-06-19 09:20:58,323] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 09:27:20 kafka | [2025-06-19 09:20:58,330] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController) 09:27:20 kafka | [2025-06-19 09:20:58,335] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController) 09:27:20 kafka | [2025-06-19 09:20:58,340] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager) 09:27:20 kafka | [2025-06-19 09:20:58,354] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController) 09:27:20 kafka | [2025-06-19 09:20:58,355] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController) 09:27:20 kafka | [2025-06-19 09:20:58,356] INFO [Controller id=1] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController) 09:27:20 kafka | [2025-06-19 09:20:58,356] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController) 09:27:20 kafka | [2025-06-19 09:20:58,357] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread) 09:27:20 kafka | [2025-06-19 09:20:58,357] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread) 09:27:20 kafka | [2025-06-19 09:20:58,367] INFO [Controller id=1] List of topics to be deleted: (kafka.controller.KafkaController) 09:27:20 kafka | [2025-06-19 09:20:58,367] INFO [Controller id=1] List of topics ineligible for deletion: (kafka.controller.KafkaController) 09:27:20 kafka | [2025-06-19 09:20:58,367] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController) 09:27:20 kafka | [2025-06-19 09:20:58,368] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager) 09:27:20 kafka | [2025-06-19 09:20:58,370] INFO 
[Controller id=1] Sending update metadata request (kafka.controller.KafkaController) 09:27:20 kafka | [2025-06-19 09:20:58,374] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger) 09:27:20 kafka | [2025-06-19 09:20:58,379] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. (kafka.network.SocketServer) 09:27:20 kafka | [2025-06-19 09:20:58,382] INFO [ReplicaStateMachine controllerId=1] Initializing replica state (kafka.controller.ZkReplicaStateMachine) 09:27:20 kafka | [2025-06-19 09:20:58,382] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine) 09:27:20 kafka | [2025-06-19 09:20:58,394] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine) 09:27:20 kafka | [2025-06-19 09:20:58,394] DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine) 09:27:20 kafka | [2025-06-19 09:20:58,395] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine) 09:27:20 kafka | [2025-06-19 09:20:58,395] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine) 09:27:20 kafka | [2025-06-19 09:20:58,398] DEBUG [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine) 09:27:20 kafka | [2025-06-19 09:20:58,398] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController) 09:27:20 kafka | [2025-06-19 09:20:58,400] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread) 09:27:20 kafka | [2025-06-19 09:20:58,416] INFO [Controller id=1] Partitions undergoing preferred replica election: (kafka.controller.KafkaController) 09:27:20 kafka | [2025-06-19 09:20:58,416] INFO [Controller id=1] Partitions that completed preferred replica election: (kafka.controller.KafkaController) 09:27:20 kafka | [2025-06-19 09:20:58,418] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController) 09:27:20 kafka | [2025-06-19 09:20:58,419] INFO [Controller id=1] Resuming preferred replica election for partitions: (kafka.controller.KafkaController) 09:27:20 kafka | [2025-06-19 09:20:58,421] INFO [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered (kafka.controller.KafkaController) 09:27:20 kafka | [2025-06-19 09:20:58,431] INFO Kafka version: 7.4.9-ccs (org.apache.kafka.common.utils.AppInfoParser) 09:27:20 kafka | [2025-06-19 09:20:58,431] INFO Kafka commitId: 07d888cfc0d14765fe5557324f1fdb4ada6698a5 (org.apache.kafka.common.utils.AppInfoParser) 09:27:20 kafka | [2025-06-19 09:20:58,431] INFO Kafka startTimeMs: 1750324858412 (org.apache.kafka.common.utils.AppInfoParser) 09:27:20 kafka | [2025-06-19 09:20:58,432] INFO [KafkaServer id=1] started (kafka.server.KafkaServer) 09:27:20 kafka | [2025-06-19 09:20:58,442] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController) 09:27:20 kafka | [2025-06-19 09:20:58,497] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) 
for request UPDATE_METADATA with correlation id 0 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 09:27:20 kafka | [2025-06-19 09:20:58,530] INFO [BrokerToControllerChannelManager broker=1 name=alterPartition]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) 09:27:20 kafka | [2025-06-19 09:20:58,568] INFO [BrokerToControllerChannelManager broker=1 name=forwarding]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) 09:27:20 kafka | [2025-06-19 09:21:07,567] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController) 09:27:20 kafka | [2025-06-19 09:21:07,568] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController) 09:27:20 kafka | [2025-06-19 09:21:21,898] WARN Client session timed out, have not heard from server in 14331ms for session id 0x10000022abf0001 (org.apache.zookeeper.ClientCnxn) 09:27:20 kafka | [2025-06-19 09:21:21,905] WARN Session 0x10000022abf0001 for server zookeeper/172.17.0.4:2181, Closing socket connection. Attempting reconnect except it is a SessionExpiredException. (org.apache.zookeeper.ClientCnxn) 09:27:20 kafka | org.apache.zookeeper.ClientCnxn$SessionTimeoutException: Client session timed out, have not heard from server in 14331ms for session id 0x10000022abf0001 09:27:20 kafka | at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1257) 09:27:20 kafka | [2025-06-19 09:21:23,535] INFO Opening socket connection to server zookeeper/172.17.0.4:2181. (org.apache.zookeeper.ClientCnxn) 09:27:20 kafka | [2025-06-19 09:21:23,536] INFO Socket connection established, initiating session, client: /172.17.0.5:55724, server: zookeeper/172.17.0.4:2181 (org.apache.zookeeper.ClientCnxn) 09:27:20 kafka | [2025-06-19 09:21:23,540] INFO Session establishment complete on server zookeeper/172.17.0.4:2181, session id = 0x10000022abf0001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn) 09:27:20 kafka | [2025-06-19 09:21:53,968] DEBUG [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block (kafka.controller.KafkaController) 09:27:20 kafka | [2025-06-19 09:21:53,970] INFO Creating topic policy-pdp-pap with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) 09:27:20 kafka | [2025-06-19 09:21:53,970] INFO Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> 
ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) 09:27:20 kafka | [2025-06-19 09:21:54,801] INFO [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 (kafka.controller.KafkaController) 09:27:20 kafka | [2025-06-19 09:21:54,838] INFO [Controller id=1] New topics: [Set(policy-pdp-pap, __consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-pdp-pap,Some(3o7ikwD8TaWmaburTTLHvg),Map(policy-pdp-pap-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))), TopicIdReplicaAssignment(__consumer_offsets,Some(z_08NlOLSPm4iNqsLXO7dQ),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> 
ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) 09:27:20 kafka | [2025-06-19 09:21:54,840] INFO [Controller id=1] New partition creation callback for __consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-37,policy-pdp-pap-0,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.controller.KafkaController) 09:27:20 kafka | [2025-06-19 09:21:54,842] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,843] INFO [Controller id=1 
epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,843] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,843] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,843] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,843] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,843] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,843] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,843] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,843] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,843] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,844] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,844] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,844] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,844] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,844] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,844] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,844] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,844] INFO [Controller id=1 
epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,844] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,844] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,844] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,844] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,844] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,845] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,845] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,845] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,845] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,845] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,845] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,845] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,846] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,846] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,846] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,846] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,846] INFO [Controller id=1 epoch=1] 
Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,846] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,846] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,846] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,846] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,846] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,846] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,846] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,847] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,847] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,847] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,847] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,847] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,847] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,847] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,847] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,847] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,852] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition 
__consumer_offsets-32 from NonExistentReplica to NewReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,852] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NonExistentReplica to NewReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,852] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NonExistentReplica to NewReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,852] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NonExistentReplica to NewReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,853] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NonExistentReplica to NewReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,853] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NonExistentReplica to NewReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,853] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NonExistentReplica to NewReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,853] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NonExistentReplica to NewReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,853] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NonExistentReplica to NewReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,853] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NonExistentReplica to NewReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,853] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NonExistentReplica to NewReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,853] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NonExistentReplica to NewReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,853] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NonExistentReplica to NewReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,853] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NonExistentReplica to NewReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,853] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NonExistentReplica to NewReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,853] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NonExistentReplica to NewReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,854] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NonExistentReplica to NewReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,854] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NonExistentReplica to NewReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,854] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for 
partition __consumer_offsets-0 from NonExistentReplica to NewReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,854] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NonExistentReplica to NewReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,854] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NonExistentReplica to NewReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,854] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NonExistentReplica to NewReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,854] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NonExistentReplica to NewReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,854] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NonExistentReplica to NewReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,854] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NonExistentReplica to NewReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,854] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NonExistentReplica to NewReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,854] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NonExistentReplica to NewReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,854] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NonExistentReplica to NewReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,854] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NonExistentReplica to NewReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,854] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NonExistentReplica to NewReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,854] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NonExistentReplica to NewReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,854] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NonExistentReplica to NewReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,854] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NonExistentReplica to NewReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,854] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NonExistentReplica to NewReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,854] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NonExistentReplica to NewReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,855] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NonExistentReplica to NewReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,855] TRACE [Controller id=1 epoch=1] Changed state of 
replica 1 for partition __consumer_offsets-41 from NonExistentReplica to NewReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,855] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NonExistentReplica to NewReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,855] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NonExistentReplica to NewReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,855] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NonExistentReplica to NewReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,855] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NonExistentReplica to NewReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,855] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NonExistentReplica to NewReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,855] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NonExistentReplica to NewReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,855] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NonExistentReplica to NewReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,855] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NonExistentReplica to NewReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,855] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NonExistentReplica to NewReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,855] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NonExistentReplica to NewReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,855] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NonExistentReplica to NewReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,856] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NonExistentReplica to NewReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,856] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NonExistentReplica to NewReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,856] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NonExistentReplica to NewReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,856] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,956] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,956] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), 
leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,956] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,956] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,956] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,956] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,956] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,956] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,956] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,957] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,957] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,957] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,957] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,957] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,957] INFO [Controller id=1 epoch=1] 
Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,957] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,957] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,957] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,957] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,957] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,957] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,957] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,957] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,957] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,958] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,958] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,958] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), 
leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,958] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,958] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,958] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,958] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,958] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,958] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,958] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,958] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,958] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,958] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,958] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,958] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,958] INFO [Controller id=1 epoch=1] 
Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,958] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,958] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,958] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,958] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,958] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,958] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,958] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,958] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,958] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,958] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,958] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,961] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, 
leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-13 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,961] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-46 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,961] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-9 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,961] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-42 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,961] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-21 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,961] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-17 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,961] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-30 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,961] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-26 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,962] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to 
broker 1 for partition __consumer_offsets-5 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,962] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-38 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,962] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,962] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-34 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,962] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-16 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,962] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-45 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,962] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-12 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,962] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-41 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,962] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-24 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,962] TRACE [Controller id=1 epoch=1] Sending 
become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-20 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,962] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-49 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,962] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-0 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,962] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-29 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,962] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-25 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,962] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-8 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,962] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-37 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,962] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-4 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,962] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, 
leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-33 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,962] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-15 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,962] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-48 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,962] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-11 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,962] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-44 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,962] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-23 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,962] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-19 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,962] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-32 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,962] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for 
partition __consumer_offsets-28 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,962] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-7 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,962] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-40 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,962] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-3 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,963] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-36 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,963] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-47 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,963] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-14 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,963] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-43 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,963] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-10 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,963] TRACE [Controller id=1 epoch=1] Sending become-leader 
LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-22 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,963] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-18 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,963] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-31 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,963] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-27 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,963] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-39 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,963] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-6 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,963] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-35 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,963] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-pdp-pap-0 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,963] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], 
partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-2 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,964] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 51 become-leader and 0 become-follower partitions (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,969] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 51 partitions (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,971] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NewReplica to OnlineReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,971] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NewReplica to OnlineReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,971] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NewReplica to OnlineReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,971] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NewReplica to OnlineReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,971] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NewReplica to OnlineReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,971] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NewReplica to OnlineReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,971] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NewReplica to OnlineReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,971] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NewReplica to OnlineReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,971] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NewReplica to OnlineReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,972] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NewReplica to OnlineReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,972] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NewReplica to OnlineReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,972] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NewReplica to OnlineReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,972] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NewReplica to OnlineReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,972] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NewReplica to OnlineReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,972] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NewReplica to OnlineReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,972] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition 
__consumer_offsets-16 from NewReplica to OnlineReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,972] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NewReplica to OnlineReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,972] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NewReplica to OnlineReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,972] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NewReplica to OnlineReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,972] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NewReplica to OnlineReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,972] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NewReplica to OnlineReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,972] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NewReplica to OnlineReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,972] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NewReplica to OnlineReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,972] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NewReplica to OnlineReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,972] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NewReplica to OnlineReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,972] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NewReplica to OnlineReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,972] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NewReplica to OnlineReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,972] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NewReplica to OnlineReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,972] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NewReplica to OnlineReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,972] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NewReplica to OnlineReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,972] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NewReplica to OnlineReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,972] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NewReplica to OnlineReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,972] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NewReplica to OnlineReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,972] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NewReplica to OnlineReplica (state.change.logger) 
09:27:20 kafka | [2025-06-19 09:21:54,972] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NewReplica to OnlineReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,972] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NewReplica to OnlineReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,972] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NewReplica to OnlineReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,972] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NewReplica to OnlineReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,972] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NewReplica to OnlineReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,972] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NewReplica to OnlineReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,972] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NewReplica to OnlineReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,972] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NewReplica to OnlineReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,972] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NewReplica to OnlineReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,972] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NewReplica to OnlineReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,973] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NewReplica to OnlineReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,973] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NewReplica to OnlineReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,973] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NewReplica to OnlineReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,973] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NewReplica to OnlineReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,973] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NewReplica to OnlineReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,973] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NewReplica to OnlineReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,973] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NewReplica to OnlineReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,973] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,975] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 1 from 
controller 1 for 51 partitions (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,976] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,976] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,976] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,976] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,976] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,976] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,976] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,977] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,977] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) 
correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,977] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,977] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,977] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,977] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,977] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,977] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,977] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,977] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,977] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, 
leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,977] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,978] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,978] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,978] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,978] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,978] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,978] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,978] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,978] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], 
isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,978] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,979] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,979] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,979] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,979] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,979] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,979] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,979] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,979] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], 
removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,979] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,980] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,980] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,981] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,981] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,981] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,981] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,981] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,981] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], 
addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,981] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,981] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,981] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,981] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,981] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:54,981] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,019] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-3 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,020] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,020] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-41 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,020] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,020] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from 
controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-33 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,020] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,020] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-19 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,020] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-34 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,020] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-4 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,020] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-11 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,020] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,020] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-49 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,020] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-39 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,020] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-9 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,021] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-24 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,021] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-31 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,021] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-46 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,021] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,021] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-16 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,021] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition 
__consumer_offsets-2 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,021] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-25 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,021] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-40 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,021] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-47 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,021] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,022] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-32 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,022] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-37 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,022] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-7 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,022] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-22 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,022] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,022] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-44 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,022] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-14 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,022] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-23 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,022] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-38 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,022] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,022] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,023] TRACE 
[Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-45 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,023] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-15 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,023] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-30 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,023] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-0 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,023] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-35 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,023] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-5 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,023] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,023] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-27 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,023] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-42 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,023] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-12 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,023] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-21 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,024] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-36 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,024] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-6 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,024] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-43 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,024] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-13 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,024] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 
starting the become-leader transition for partition __consumer_offsets-28 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,025] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-37, policy-pdp-pap-0, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager) 09:27:20 kafka | [2025-06-19 09:21:55,026] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 51 partitions (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,074] INFO [LogLoader partition=__consumer_offsets-3, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:27:20 kafka | [2025-06-19 09:21:55,084] INFO Created log for partition __consumer_offsets-3 in /var/lib/kafka/data/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:27:20 kafka | [2025-06-19 09:21:55,086] INFO [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,087] INFO [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,088] INFO [Broker id=1] Leader __consumer_offsets-3 with topic id Some(z_08NlOLSPm4iNqsLXO7dQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,102] INFO [LogLoader partition=__consumer_offsets-18, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:27:20 kafka | [2025-06-19 09:21:55,103] INFO Created log for partition __consumer_offsets-18 in /var/lib/kafka/data/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:27:20 kafka | [2025-06-19 09:21:55,104] INFO [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,104] INFO [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,104] INFO [Broker id=1] Leader __consumer_offsets-18 with topic id Some(z_08NlOLSPm4iNqsLXO7dQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,112] INFO [LogLoader partition=__consumer_offsets-41, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:27:20 kafka | [2025-06-19 09:21:55,113] INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:27:20 kafka | [2025-06-19 09:21:55,113] INFO [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,113] INFO [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,113] INFO [Broker id=1] Leader __consumer_offsets-41 with topic id Some(z_08NlOLSPm4iNqsLXO7dQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,119] INFO [LogLoader partition=__consumer_offsets-10, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:27:20 kafka | [2025-06-19 09:21:55,120] INFO Created log for partition __consumer_offsets-10 in /var/lib/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:27:20 kafka | [2025-06-19 09:21:55,120] INFO [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,120] INFO [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,120] INFO [Broker id=1] Leader __consumer_offsets-10 with topic id Some(z_08NlOLSPm4iNqsLXO7dQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,128] INFO [LogLoader partition=__consumer_offsets-33, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:27:20 kafka | [2025-06-19 09:21:55,129] INFO Created log for partition __consumer_offsets-33 in /var/lib/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:27:20 kafka | [2025-06-19 09:21:55,129] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,129] INFO [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,129] INFO [Broker id=1] Leader __consumer_offsets-33 with topic id Some(z_08NlOLSPm4iNqsLXO7dQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,136] INFO [LogLoader partition=__consumer_offsets-48, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:27:20 kafka | [2025-06-19 09:21:55,137] INFO Created log for partition __consumer_offsets-48 in /var/lib/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:27:20 kafka | [2025-06-19 09:21:55,137] INFO [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,137] INFO [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,137] INFO [Broker id=1] Leader __consumer_offsets-48 with topic id Some(z_08NlOLSPm4iNqsLXO7dQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,145] INFO [LogLoader partition=__consumer_offsets-19, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:27:20 kafka | [2025-06-19 09:21:55,147] INFO Created log for partition __consumer_offsets-19 in /var/lib/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:27:20 kafka | [2025-06-19 09:21:55,147] INFO [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,147] INFO [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,147] INFO [Broker id=1] Leader __consumer_offsets-19 with topic id Some(z_08NlOLSPm4iNqsLXO7dQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,155] INFO [LogLoader partition=__consumer_offsets-34, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:27:20 kafka | [2025-06-19 09:21:55,156] INFO Created log for partition __consumer_offsets-34 in /var/lib/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:27:20 kafka | [2025-06-19 09:21:55,156] INFO [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,156] INFO [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,156] INFO [Broker id=1] Leader __consumer_offsets-34 with topic id Some(z_08NlOLSPm4iNqsLXO7dQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,165] INFO [LogLoader partition=__consumer_offsets-4, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:27:20 kafka | [2025-06-19 09:21:55,166] INFO Created log for partition __consumer_offsets-4 in /var/lib/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:27:20 kafka | [2025-06-19 09:21:55,167] INFO [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,167] INFO [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,167] INFO [Broker id=1] Leader __consumer_offsets-4 with topic id Some(z_08NlOLSPm4iNqsLXO7dQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,174] INFO [LogLoader partition=__consumer_offsets-11, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:27:20 kafka | [2025-06-19 09:21:55,175] INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:27:20 kafka | [2025-06-19 09:21:55,175] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,175] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,175] INFO [Broker id=1] Leader __consumer_offsets-11 with topic id Some(z_08NlOLSPm4iNqsLXO7dQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,183] INFO [LogLoader partition=__consumer_offsets-26, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:27:20 kafka | [2025-06-19 09:21:55,184] INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:27:20 kafka | [2025-06-19 09:21:55,184] INFO [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,184] INFO [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,184] INFO [Broker id=1] Leader __consumer_offsets-26 with topic id Some(z_08NlOLSPm4iNqsLXO7dQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,192] INFO [LogLoader partition=__consumer_offsets-49, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:27:20 kafka | [2025-06-19 09:21:55,192] INFO Created log for partition __consumer_offsets-49 in /var/lib/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:27:20 kafka | [2025-06-19 09:21:55,192] INFO [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,192] INFO [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,192] INFO [Broker id=1] Leader __consumer_offsets-49 with topic id Some(z_08NlOLSPm4iNqsLXO7dQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,199] INFO [LogLoader partition=__consumer_offsets-39, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:27:20 kafka | [2025-06-19 09:21:55,200] INFO Created log for partition __consumer_offsets-39 in /var/lib/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:27:20 kafka | [2025-06-19 09:21:55,200] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,200] INFO [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,200] INFO [Broker id=1] Leader __consumer_offsets-39 with topic id Some(z_08NlOLSPm4iNqsLXO7dQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,209] INFO [LogLoader partition=__consumer_offsets-9, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:27:20 kafka | [2025-06-19 09:21:55,210] INFO Created log for partition __consumer_offsets-9 in /var/lib/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:27:20 kafka | [2025-06-19 09:21:55,210] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,210] INFO [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,210] INFO [Broker id=1] Leader __consumer_offsets-9 with topic id Some(z_08NlOLSPm4iNqsLXO7dQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,218] INFO [LogLoader partition=__consumer_offsets-24, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:27:20 kafka | [2025-06-19 09:21:55,219] INFO Created log for partition __consumer_offsets-24 in /var/lib/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:27:20 kafka | [2025-06-19 09:21:55,219] INFO [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,219] INFO [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,219] INFO [Broker id=1] Leader __consumer_offsets-24 with topic id Some(z_08NlOLSPm4iNqsLXO7dQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,229] INFO [LogLoader partition=__consumer_offsets-31, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:27:20 kafka | [2025-06-19 09:21:55,230] INFO Created log for partition __consumer_offsets-31 in /var/lib/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:27:20 kafka | [2025-06-19 09:21:55,230] INFO [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,230] INFO [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,230] INFO [Broker id=1] Leader __consumer_offsets-31 with topic id Some(z_08NlOLSPm4iNqsLXO7dQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,242] INFO [LogLoader partition=__consumer_offsets-46, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:27:20 kafka | [2025-06-19 09:21:55,245] INFO Created log for partition __consumer_offsets-46 in /var/lib/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:27:20 kafka | [2025-06-19 09:21:55,245] INFO [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,245] INFO [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,245] INFO [Broker id=1] Leader __consumer_offsets-46 with topic id Some(z_08NlOLSPm4iNqsLXO7dQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,252] INFO [LogLoader partition=__consumer_offsets-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:27:20 kafka | [2025-06-19 09:21:55,253] INFO Created log for partition __consumer_offsets-1 in /var/lib/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:27:20 kafka | [2025-06-19 09:21:55,253] INFO [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,253] INFO [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,253] INFO [Broker id=1] Leader __consumer_offsets-1 with topic id Some(z_08NlOLSPm4iNqsLXO7dQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,260] INFO [LogLoader partition=__consumer_offsets-16, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:27:20 kafka | [2025-06-19 09:21:55,262] INFO Created log for partition __consumer_offsets-16 in /var/lib/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:27:20 kafka | [2025-06-19 09:21:55,262] INFO [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,262] INFO [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,262] INFO [Broker id=1] Leader __consumer_offsets-16 with topic id Some(z_08NlOLSPm4iNqsLXO7dQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,269] INFO [LogLoader partition=__consumer_offsets-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:27:20 kafka | [2025-06-19 09:21:55,270] INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:27:20 kafka | [2025-06-19 09:21:55,270] INFO [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,270] INFO [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,270] INFO [Broker id=1] Leader __consumer_offsets-2 with topic id Some(z_08NlOLSPm4iNqsLXO7dQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,277] INFO [LogLoader partition=__consumer_offsets-25, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:27:20 kafka | [2025-06-19 09:21:55,278] INFO Created log for partition __consumer_offsets-25 in /var/lib/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:27:20 kafka | [2025-06-19 09:21:55,278] INFO [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,278] INFO [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,278] INFO [Broker id=1] Leader __consumer_offsets-25 with topic id Some(z_08NlOLSPm4iNqsLXO7dQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,289] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:27:20 kafka | [2025-06-19 09:21:55,289] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:27:20 kafka | [2025-06-19 09:21:55,290] INFO [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,290] INFO [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,290] INFO [Broker id=1] Leader __consumer_offsets-40 with topic id Some(z_08NlOLSPm4iNqsLXO7dQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,297] INFO [LogLoader partition=__consumer_offsets-47, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:27:20 kafka | [2025-06-19 09:21:55,298] INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:27:20 kafka | [2025-06-19 09:21:55,298] INFO [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,298] INFO [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,298] INFO [Broker id=1] Leader __consumer_offsets-47 with topic id Some(z_08NlOLSPm4iNqsLXO7dQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,305] INFO [LogLoader partition=__consumer_offsets-17, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:27:20 kafka | [2025-06-19 09:21:55,305] INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:27:20 kafka | [2025-06-19 09:21:55,305] INFO [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,305] INFO [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,305] INFO [Broker id=1] Leader __consumer_offsets-17 with topic id Some(z_08NlOLSPm4iNqsLXO7dQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,313] INFO [LogLoader partition=__consumer_offsets-32, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:27:20 kafka | [2025-06-19 09:21:55,314] INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:27:20 kafka | [2025-06-19 09:21:55,314] INFO [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,314] INFO [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,314] INFO [Broker id=1] Leader __consumer_offsets-32 with topic id Some(z_08NlOLSPm4iNqsLXO7dQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,321] INFO [LogLoader partition=__consumer_offsets-37, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:27:20 kafka | [2025-06-19 09:21:55,321] INFO Created log for partition __consumer_offsets-37 in /var/lib/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:27:20 kafka | [2025-06-19 09:21:55,321] INFO [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,321] INFO [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,322] INFO [Broker id=1] Leader __consumer_offsets-37 with topic id Some(z_08NlOLSPm4iNqsLXO7dQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,329] INFO [LogLoader partition=__consumer_offsets-7, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:27:20 kafka | [2025-06-19 09:21:55,330] INFO Created log for partition __consumer_offsets-7 in /var/lib/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:27:20 kafka | [2025-06-19 09:21:55,330] INFO [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,330] INFO [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,330] INFO [Broker id=1] Leader __consumer_offsets-7 with topic id Some(z_08NlOLSPm4iNqsLXO7dQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,337] INFO [LogLoader partition=__consumer_offsets-22, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:27:20 kafka | [2025-06-19 09:21:55,337] INFO Created log for partition __consumer_offsets-22 in /var/lib/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:27:20 kafka | [2025-06-19 09:21:55,337] INFO [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,337] INFO [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,337] INFO [Broker id=1] Leader __consumer_offsets-22 with topic id Some(z_08NlOLSPm4iNqsLXO7dQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,345] INFO [LogLoader partition=__consumer_offsets-29, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:27:20 kafka | [2025-06-19 09:21:55,346] INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:27:20 kafka | [2025-06-19 09:21:55,346] INFO [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,346] INFO [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,346] INFO [Broker id=1] Leader __consumer_offsets-29 with topic id Some(z_08NlOLSPm4iNqsLXO7dQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,352] INFO [LogLoader partition=__consumer_offsets-44, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:27:20 kafka | [2025-06-19 09:21:55,353] INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:27:20 kafka | [2025-06-19 09:21:55,353] INFO [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,353] INFO [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,353] INFO [Broker id=1] Leader __consumer_offsets-44 with topic id Some(z_08NlOLSPm4iNqsLXO7dQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,360] INFO [LogLoader partition=__consumer_offsets-14, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:27:20 kafka | [2025-06-19 09:21:55,360] INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:27:20 kafka | [2025-06-19 09:21:55,360] INFO [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,360] INFO [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,360] INFO [Broker id=1] Leader __consumer_offsets-14 with topic id Some(z_08NlOLSPm4iNqsLXO7dQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,366] INFO [LogLoader partition=__consumer_offsets-23, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:27:20 kafka | [2025-06-19 09:21:55,367] INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:27:20 kafka | [2025-06-19 09:21:55,367] INFO [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,367] INFO [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,367] INFO [Broker id=1] Leader __consumer_offsets-23 with topic id Some(z_08NlOLSPm4iNqsLXO7dQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,373] INFO [LogLoader partition=__consumer_offsets-38, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:27:20 kafka | [2025-06-19 09:21:55,374] INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:27:20 kafka | [2025-06-19 09:21:55,374] INFO [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,374] INFO [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,374] INFO [Broker id=1] Leader __consumer_offsets-38 with topic id Some(z_08NlOLSPm4iNqsLXO7dQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,382] INFO [LogLoader partition=__consumer_offsets-8, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:27:20 kafka | [2025-06-19 09:21:55,382] INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:27:20 kafka | [2025-06-19 09:21:55,382] INFO [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,382] INFO [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,382] INFO [Broker id=1] Leader __consumer_offsets-8 with topic id Some(z_08NlOLSPm4iNqsLXO7dQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,388] INFO [LogLoader partition=policy-pdp-pap-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:27:20 kafka | [2025-06-19 09:21:55,389] INFO Created log for partition policy-pdp-pap-0 in /var/lib/kafka/data/policy-pdp-pap-0 with properties {} (kafka.log.LogManager) 09:27:20 kafka | [2025-06-19 09:21:55,389] INFO [Partition policy-pdp-pap-0 broker=1] No checkpointed highwatermark is found for partition policy-pdp-pap-0 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,389] INFO [Partition policy-pdp-pap-0 broker=1] Log loaded for partition policy-pdp-pap-0 with initial high watermark 0 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,389] INFO [Broker id=1] Leader policy-pdp-pap-0 with topic id Some(3o7ikwD8TaWmaburTTLHvg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,397] INFO [LogLoader partition=__consumer_offsets-45, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:27:20 kafka | [2025-06-19 09:21:55,397] INFO Created log for partition __consumer_offsets-45 in /var/lib/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:27:20 kafka | [2025-06-19 09:21:55,397] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,398] INFO [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,398] INFO [Broker id=1] Leader __consumer_offsets-45 with topic id Some(z_08NlOLSPm4iNqsLXO7dQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,404] INFO [LogLoader partition=__consumer_offsets-15, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:27:20 kafka | [2025-06-19 09:21:55,405] INFO Created log for partition __consumer_offsets-15 in /var/lib/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:27:20 kafka | [2025-06-19 09:21:55,405] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,405] INFO [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,405] INFO [Broker id=1] Leader __consumer_offsets-15 with topic id Some(z_08NlOLSPm4iNqsLXO7dQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,413] INFO [LogLoader partition=__consumer_offsets-30, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:27:20 kafka | [2025-06-19 09:21:55,413] INFO Created log for partition __consumer_offsets-30 in /var/lib/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:27:20 kafka | [2025-06-19 09:21:55,414] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,414] INFO [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,414] INFO [Broker id=1] Leader __consumer_offsets-30 with topic id Some(z_08NlOLSPm4iNqsLXO7dQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,421] INFO [LogLoader partition=__consumer_offsets-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:27:20 kafka | [2025-06-19 09:21:55,423] INFO Created log for partition __consumer_offsets-0 in /var/lib/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:27:20 kafka | [2025-06-19 09:21:55,423] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,423] INFO [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,423] INFO [Broker id=1] Leader __consumer_offsets-0 with topic id Some(z_08NlOLSPm4iNqsLXO7dQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,432] INFO [LogLoader partition=__consumer_offsets-35, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:27:20 kafka | [2025-06-19 09:21:55,433] INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:27:20 kafka | [2025-06-19 09:21:55,433] INFO [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,433] INFO [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,433] INFO [Broker id=1] Leader __consumer_offsets-35 with topic id Some(z_08NlOLSPm4iNqsLXO7dQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,441] INFO [LogLoader partition=__consumer_offsets-5, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:27:20 kafka | [2025-06-19 09:21:55,441] INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:27:20 kafka | [2025-06-19 09:21:55,441] INFO [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,441] INFO [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,441] INFO [Broker id=1] Leader __consumer_offsets-5 with topic id Some(z_08NlOLSPm4iNqsLXO7dQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,451] INFO [LogLoader partition=__consumer_offsets-20, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:27:20 kafka | [2025-06-19 09:21:55,451] INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:27:20 kafka | [2025-06-19 09:21:55,451] INFO [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,451] INFO [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,451] INFO [Broker id=1] Leader __consumer_offsets-20 with topic id Some(z_08NlOLSPm4iNqsLXO7dQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,460] INFO [LogLoader partition=__consumer_offsets-27, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:27:20 kafka | [2025-06-19 09:21:55,461] INFO Created log for partition __consumer_offsets-27 in /var/lib/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:27:20 kafka | [2025-06-19 09:21:55,461] INFO [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,461] INFO [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,461] INFO [Broker id=1] Leader __consumer_offsets-27 with topic id Some(z_08NlOLSPm4iNqsLXO7dQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,468] INFO [LogLoader partition=__consumer_offsets-42, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:27:20 kafka | [2025-06-19 09:21:55,469] INFO Created log for partition __consumer_offsets-42 in /var/lib/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:27:20 kafka | [2025-06-19 09:21:55,469] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,469] INFO [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,469] INFO [Broker id=1] Leader __consumer_offsets-42 with topic id Some(z_08NlOLSPm4iNqsLXO7dQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,476] INFO [LogLoader partition=__consumer_offsets-12, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:27:20 kafka | [2025-06-19 09:21:55,477] INFO Created log for partition __consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:27:20 kafka | [2025-06-19 09:21:55,477] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,477] INFO [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,477] INFO [Broker id=1] Leader __consumer_offsets-12 with topic id Some(z_08NlOLSPm4iNqsLXO7dQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,483] INFO [LogLoader partition=__consumer_offsets-21, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:27:20 kafka | [2025-06-19 09:21:55,484] INFO Created log for partition __consumer_offsets-21 in /var/lib/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:27:20 kafka | [2025-06-19 09:21:55,484] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,484] INFO [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,484] INFO [Broker id=1] Leader __consumer_offsets-21 with topic id Some(z_08NlOLSPm4iNqsLXO7dQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,491] INFO [LogLoader partition=__consumer_offsets-36, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:27:20 kafka | [2025-06-19 09:21:55,491] INFO Created log for partition __consumer_offsets-36 in /var/lib/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:27:20 kafka | [2025-06-19 09:21:55,492] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,492] INFO [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,492] INFO [Broker id=1] Leader __consumer_offsets-36 with topic id Some(z_08NlOLSPm4iNqsLXO7dQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,498] INFO [LogLoader partition=__consumer_offsets-6, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:27:20 kafka | [2025-06-19 09:21:55,499] INFO Created log for partition __consumer_offsets-6 in /var/lib/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:27:20 kafka | [2025-06-19 09:21:55,499] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,499] INFO [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,499] INFO [Broker id=1] Leader __consumer_offsets-6 with topic id Some(z_08NlOLSPm4iNqsLXO7dQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,507] INFO [LogLoader partition=__consumer_offsets-43, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:27:20 kafka | [2025-06-19 09:21:55,508] INFO Created log for partition __consumer_offsets-43 in /var/lib/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:27:20 kafka | [2025-06-19 09:21:55,508] INFO [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,508] INFO [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,508] INFO [Broker id=1] Leader __consumer_offsets-43 with topic id Some(z_08NlOLSPm4iNqsLXO7dQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,514] INFO [LogLoader partition=__consumer_offsets-13, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:27:20 kafka | [2025-06-19 09:21:55,514] INFO Created log for partition __consumer_offsets-13 in /var/lib/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:27:20 kafka | [2025-06-19 09:21:55,515] INFO [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,515] INFO [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,515] INFO [Broker id=1] Leader __consumer_offsets-13 with topic id Some(z_08NlOLSPm4iNqsLXO7dQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,522] INFO [LogLoader partition=__consumer_offsets-28, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:27:20 kafka | [2025-06-19 09:21:55,522] INFO Created log for partition __consumer_offsets-28 in /var/lib/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:27:20 kafka | [2025-06-19 09:21:55,523] INFO [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,523] INFO [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:21:55,523] INFO [Broker id=1] Leader __consumer_offsets-28 with topic id Some(z_08NlOLSPm4iNqsLXO7dQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,531] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,531] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,531] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,531] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,531] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,531] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-48 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,531] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,531] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-34 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,531] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-4 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,531] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,531] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-26 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,531] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-49 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,531] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-39 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,531] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-9 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,531] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-24 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,531] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 
epoch 1 for the become-leader transition for partition __consumer_offsets-31 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,531] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-46 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,531] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,531] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-16 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,531] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,531] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-25 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,531] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-40 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,531] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-47 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,531] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,531] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-32 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,531] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-37 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,531] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-7 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,531] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-22 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,531] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,531] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-44 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,531] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-14 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,532] TRACE 
[Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,532] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,532] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-8 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,532] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,532] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-45 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,532] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-15 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,532] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-30 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,532] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-0 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,532] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-35 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,532] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-5 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,532] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-20 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,532] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-27 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,532] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-42 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,532] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-12 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,532] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-21 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,532] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-36 
(state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,532] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-6 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,532] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-43 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,532] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-13 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,532] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-28 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,537] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:27:20 kafka | [2025-06-19 09:21:55,539] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,540] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:27:20 kafka | [2025-06-19 09:21:55,540] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,540] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:27:20 kafka | [2025-06-19 09:21:55,540] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,540] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:27:20 kafka | [2025-06-19 09:21:55,540] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,540] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:27:20 kafka | [2025-06-19 09:21:55,540] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,540] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:27:20 kafka | [2025-06-19 09:21:55,540] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,540] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:27:20 kafka | [2025-06-19 09:21:55,540] INFO [GroupMetadataManager brokerId=1] Scheduling loading of 
offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,540] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:27:20 kafka | [2025-06-19 09:21:55,540] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,540] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:27:20 kafka | [2025-06-19 09:21:55,540] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,540] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:27:20 kafka | [2025-06-19 09:21:55,540] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,540] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:27:20 kafka | [2025-06-19 09:21:55,540] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,541] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:27:20 kafka | [2025-06-19 09:21:55,541] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,541] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:27:20 kafka | [2025-06-19 09:21:55,541] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,541] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:27:20 kafka | [2025-06-19 09:21:55,541] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,541] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:27:20 kafka | [2025-06-19 09:21:55,541] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,541] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:27:20 kafka | [2025-06-19 09:21:55,541] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from 
__consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,541] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:27:20 kafka | [2025-06-19 09:21:55,541] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,541] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:27:20 kafka | [2025-06-19 09:21:55,541] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,541] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:27:20 kafka | [2025-06-19 09:21:55,541] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,541] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:27:20 kafka | [2025-06-19 09:21:55,541] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,541] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:27:20 kafka | [2025-06-19 09:21:55,541] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,541] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:27:20 kafka | [2025-06-19 09:21:55,541] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,541] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:27:20 kafka | [2025-06-19 09:21:55,541] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,541] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:27:20 kafka | [2025-06-19 09:21:55,541] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,541] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:27:20 kafka | [2025-06-19 09:21:55,542] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 
(kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,542] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:27:20 kafka | [2025-06-19 09:21:55,542] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,542] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:27:20 kafka | [2025-06-19 09:21:55,542] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,542] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 22 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:27:20 kafka | [2025-06-19 09:21:55,542] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,542] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:27:20 kafka | [2025-06-19 09:21:55,542] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,542] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:27:20 kafka | [2025-06-19 09:21:55,542] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,542] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:27:20 kafka | [2025-06-19 09:21:55,542] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,542] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:27:20 kafka | [2025-06-19 09:21:55,542] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,542] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:27:20 kafka | [2025-06-19 09:21:55,542] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,542] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:27:20 kafka | [2025-06-19 09:21:55,542] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 
(kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,542] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:27:20 kafka | [2025-06-19 09:21:55,542] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,542] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:27:20 kafka | [2025-06-19 09:21:55,542] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,542] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:27:20 kafka | [2025-06-19 09:21:55,542] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,542] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:27:20 kafka | [2025-06-19 09:21:55,542] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,542] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:27:20 kafka | [2025-06-19 09:21:55,542] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,542] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:27:20 kafka | [2025-06-19 09:21:55,542] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,542] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:27:20 kafka | [2025-06-19 09:21:55,542] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,542] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:27:20 kafka | [2025-06-19 09:21:55,542] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,542] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:27:20 kafka | [2025-06-19 09:21:55,542] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 
(kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,542] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:27:20 kafka | [2025-06-19 09:21:55,542] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,542] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:27:20 kafka | [2025-06-19 09:21:55,542] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,542] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:27:20 kafka | [2025-06-19 09:21:55,542] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,542] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:27:20 kafka | [2025-06-19 09:21:55,542] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,542] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:27:20 kafka | [2025-06-19 09:21:55,542] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,542] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:27:20 kafka | [2025-06-19 09:21:55,542] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,542] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:27:20 kafka | [2025-06-19 09:21:55,542] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,545] INFO [Broker id=1] Finished LeaderAndIsr request in 571ms correlationId 1 from controller 1 for 51 partitions (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,550] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=z_08NlOLSPm4iNqsLXO7dQ, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, 
errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)]), LeaderAndIsrTopicError(topicId=3o7ikwD8TaWmaburTTLHvg, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 1 sent to broker 
kafka:9092 (id: 1 rack: null) (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,554] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 12 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,555] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,555] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,555] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,555] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,556] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,556] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,556] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,556] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,556] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,556] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,556] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,556] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,556] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,557] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,557] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,557] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,557] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,557] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,557] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,557] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,557] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,557] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,557] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,558] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 16 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,558] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,558] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,558] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,558] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,558] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,558] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,558] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,558] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,558] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,558] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-pdp-pap-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,558] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,559] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-13 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,559] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-46 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,559] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-9 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,559] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,559] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-42 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,559] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-21 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,559] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-17 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,559] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-30 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,559] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-26 in response to UpdateMetadata request sent by controller 1 
epoch 1 with correlation id 2 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,559] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,559] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-5 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,559] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-38 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,559] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-1 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,559] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-34 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,559] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-16 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,559] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-45 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,559] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,559] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-12 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,559] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-41 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,559] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-24 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,559] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-20 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,559] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,559] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-49 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,559] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,559] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-29 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,559] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-25 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,559] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,559] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-8 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,559] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-37 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,559] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-4 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,559] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-33 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,559] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-15 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,559] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-48 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,559] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-11 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,560] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,560] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-44 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,560] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-23 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,560] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-19 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,560] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-32 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,560] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-28 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,560] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-7 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,560] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,560] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-40 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,560] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-3 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,560] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-36 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,560] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-47 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,560] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-14 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,560] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-43 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,560] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-10 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,560] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,560] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-22 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,560] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-18 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,560] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-31 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,560] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-27 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,560] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-39 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,560] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-6 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,560] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,560] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-35 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,560] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-2 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,560] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,560] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,560] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,560] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,561] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 19 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,561] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:27:20 kafka | [2025-06-19 09:21:55,561] INFO [Broker id=1] Add 51 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,561] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 09:27:20 kafka | [2025-06-19 09:21:55,678] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group policy-pap in Empty state. Created a new member id consumer-policy-pap-4-c9c1617b-8778-482c-9701-379617313425 and request the member to rejoin with this id. 
(kafka.coordinator.group.GroupCoordinator) 09:27:20 kafka | [2025-06-19 09:21:55,692] INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-policy-pap-4-c9c1617b-8778-482c-9701-379617313425 with group instance id None; client reason: need to re-join with the given member-id: consumer-policy-pap-4-c9c1617b-8778-482c-9701-379617313425) (kafka.coordinator.group.GroupCoordinator) 09:27:20 kafka | [2025-06-19 09:21:56,425] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group c6a8bb97-2c08-4522-a637-4a0267c3b861 in Empty state. Created a new member id consumer-c6a8bb97-2c08-4522-a637-4a0267c3b861-3-51425536-985d-4a92-9ee0-85d89d098cce and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) 09:27:20 kafka | [2025-06-19 09:21:56,429] INFO [GroupCoordinator 1]: Preparing to rebalance group c6a8bb97-2c08-4522-a637-4a0267c3b861 in state PreparingRebalance with old generation 0 (__consumer_offsets-18) (reason: Adding new member consumer-c6a8bb97-2c08-4522-a637-4a0267c3b861-3-51425536-985d-4a92-9ee0-85d89d098cce with group instance id None; client reason: need to re-join with the given member-id: consumer-c6a8bb97-2c08-4522-a637-4a0267c3b861-3-51425536-985d-4a92-9ee0-85d89d098cce) (kafka.coordinator.group.GroupCoordinator) 09:27:20 kafka | [2025-06-19 09:21:58,704] INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator) 09:27:20 kafka | [2025-06-19 09:21:58,729] INFO [GroupCoordinator 1]: Assignment received from leader consumer-policy-pap-4-c9c1617b-8778-482c-9701-379617313425 for group policy-pap for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) 09:27:20 kafka | [2025-06-19 09:21:59,430] INFO [GroupCoordinator 1]: Stabilized group c6a8bb97-2c08-4522-a637-4a0267c3b861 generation 1 (__consumer_offsets-18) with 1 members (kafka.coordinator.group.GroupCoordinator) 09:27:20 kafka | [2025-06-19 09:21:59,436] INFO [GroupCoordinator 1]: Assignment received from leader consumer-c6a8bb97-2c08-4522-a637-4a0267c3b861-3-51425536-985d-4a92-9ee0-85d89d098cce for group c6a8bb97-2c08-4522-a637-4a0267c3b861 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) 09:27:20 kafka | [2025-06-19 09:22:39,058] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group opa-pdp in Empty state. Created a new member id rdkafka-a1a0e6bc-fd31-427e-b9d4-3bc6b2c54895 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) 09:27:20 kafka | [2025-06-19 09:22:39,060] INFO [GroupCoordinator 1]: Preparing to rebalance group opa-pdp in state PreparingRebalance with old generation 0 (__consumer_offsets-25) (reason: Adding new member rdkafka-a1a0e6bc-fd31-427e-b9d4-3bc6b2c54895 with group instance id None; client reason: not provided) (kafka.coordinator.group.GroupCoordinator) 09:27:20 kafka | [2025-06-19 09:22:42,061] INFO [GroupCoordinator 1]: Stabilized group opa-pdp generation 1 (__consumer_offsets-25) with 1 members (kafka.coordinator.group.GroupCoordinator) 09:27:20 kafka | [2025-06-19 09:22:42,066] INFO [GroupCoordinator 1]: Assignment received from leader rdkafka-a1a0e6bc-fd31-427e-b9d4-3bc6b2c54895 for group opa-pdp for generation 1. The group has 1 members, 0 of which are static. 
(kafka.coordinator.group.GroupCoordinator) 09:27:20 kafka | [2025-06-19 09:23:49,815] INFO Creating topic policy-notification with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) 09:27:20 kafka | [2025-06-19 09:23:49,833] INFO [Controller id=1] New topics: [Set(policy-notification)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-notification,Some(XqjpUMEBRlqjaCNitNyIZg),Map(policy-notification-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) 09:27:20 kafka | [2025-06-19 09:23:49,834] INFO [Controller id=1] New partition creation callback for policy-notification-0 (kafka.controller.KafkaController) 09:27:20 kafka | [2025-06-19 09:23:49,834] INFO [Controller id=1 epoch=1] Changed partition policy-notification-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:23:49,834] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 09:27:20 kafka | [2025-06-19 09:23:49,835] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-notification-0 from NonExistentReplica to NewReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:23:49,835] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 09:27:20 kafka | [2025-06-19 09:23:49,849] INFO [Controller id=1 epoch=1] Changed partition policy-notification-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:27:20 kafka | [2025-06-19 09:23:49,849] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-notification', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-notification-0 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:23:49,849] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 1 become-leader and 0 become-follower partitions (state.change.logger) 09:27:20 kafka | [2025-06-19 09:23:49,850] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 1 partitions (state.change.logger) 09:27:20 kafka | [2025-06-19 09:23:49,850] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-notification-0 from NewReplica to OnlineReplica (state.change.logger) 09:27:20 kafka | [2025-06-19 09:23:49,850] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 09:27:20 kafka | [2025-06-19 09:23:49,853] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 for 1 partitions (state.change.logger) 09:27:20 kafka | [2025-06-19 09:23:49,853] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-notification', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:23:49,854] TRACE [Broker 
id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition policy-notification-0 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:23:49,854] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions Set(policy-notification-0) (kafka.server.ReplicaFetcherManager) 09:27:20 kafka | [2025-06-19 09:23:49,855] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 3 from controller 1 epoch 1 as part of the become-leader transition for 1 partitions (state.change.logger) 09:27:20 kafka | [2025-06-19 09:23:49,859] INFO [LogLoader partition=policy-notification-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:27:20 kafka | [2025-06-19 09:23:49,860] INFO Created log for partition policy-notification-0 in /var/lib/kafka/data/policy-notification-0 with properties {} (kafka.log.LogManager) 09:27:20 kafka | [2025-06-19 09:23:49,861] INFO [Partition policy-notification-0 broker=1] No checkpointed highwatermark is found for partition policy-notification-0 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:23:49,861] INFO [Partition policy-notification-0 broker=1] Log loaded for partition policy-notification-0 with initial high watermark 0 (kafka.cluster.Partition) 09:27:20 kafka | [2025-06-19 09:23:49,861] INFO [Broker id=1] Leader policy-notification-0 with topic id Some(XqjpUMEBRlqjaCNitNyIZg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 09:27:20 kafka | [2025-06-19 09:23:49,870] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition policy-notification-0 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:23:49,871] INFO [Broker id=1] Finished LeaderAndIsr request in 18ms correlationId 3 from controller 1 for 1 partitions (state.change.logger) 09:27:20 kafka | [2025-06-19 09:23:49,872] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=XqjpUMEBRlqjaCNitNyIZg, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 3 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 09:27:20 kafka | [2025-06-19 09:23:49,874] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-notification', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-notification-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:23:49,874] INFO [Broker id=1] Add 1 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 09:27:20 kafka | [2025-06-19 09:23:49,875] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 4 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 09:27:20 kafka | [2025-06-19 09:25:22,162] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group testgrp in Empty state. 
Created a new member id rdkafka-8663cfc1-b241-4f5e-92d9-e09b74c8be2e and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) 09:27:20 kafka | [2025-06-19 09:25:22,164] INFO [GroupCoordinator 1]: Preparing to rebalance group testgrp in state PreparingRebalance with old generation 0 (__consumer_offsets-3) (reason: Adding new member rdkafka-8663cfc1-b241-4f5e-92d9-e09b74c8be2e with group instance id None; client reason: not provided) (kafka.coordinator.group.GroupCoordinator) 09:27:20 kafka | [2025-06-19 09:25:25,165] INFO [GroupCoordinator 1]: Stabilized group testgrp generation 1 (__consumer_offsets-3) with 1 members (kafka.coordinator.group.GroupCoordinator) 09:27:20 kafka | [2025-06-19 09:25:25,168] INFO [GroupCoordinator 1]: Assignment received from leader rdkafka-8663cfc1-b241-4f5e-92d9-e09b74c8be2e for group testgrp for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) 09:27:20 kafka | [2025-06-19 09:25:25,282] INFO [GroupCoordinator 1]: Preparing to rebalance group testgrp in state PreparingRebalance with old generation 1 (__consumer_offsets-3) (reason: Removing member rdkafka-8663cfc1-b241-4f5e-92d9-e09b74c8be2e on LeaveGroup; client reason: not provided) (kafka.coordinator.group.GroupCoordinator) 09:27:20 kafka | [2025-06-19 09:25:25,282] INFO [GroupCoordinator 1]: Group testgrp with generation 2 is now empty (__consumer_offsets-3) (kafka.coordinator.group.GroupCoordinator) 09:27:20 kafka | [2025-06-19 09:25:25,284] INFO [GroupCoordinator 1]: Member MemberMetadata(memberId=rdkafka-8663cfc1-b241-4f5e-92d9-e09b74c8be2e, groupInstanceId=None, clientId=rdkafka, clientHost=/172.17.0.7, sessionTimeoutMs=45000, rebalanceTimeoutMs=300000, supportedProtocols=List(range, roundrobin)) has left group testgrp through explicit `LeaveGroup`; client reason: not provided (kafka.coordinator.group.GroupCoordinator) 09:27:20 kafka | [2025-06-19 09:25:47,987] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group testgrp in Empty state. Created a new member id rdkafka-edec6ee0-defe-4544-af6c-55beaa020ced and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) 09:27:20 kafka | [2025-06-19 09:25:47,988] INFO [GroupCoordinator 1]: Preparing to rebalance group testgrp in state PreparingRebalance with old generation 2 (__consumer_offsets-3) (reason: Adding new member rdkafka-edec6ee0-defe-4544-af6c-55beaa020ced with group instance id None; client reason: not provided) (kafka.coordinator.group.GroupCoordinator) 09:27:20 kafka | [2025-06-19 09:25:50,988] INFO [GroupCoordinator 1]: Stabilized group testgrp generation 3 (__consumer_offsets-3) with 1 members (kafka.coordinator.group.GroupCoordinator) 09:27:20 kafka | [2025-06-19 09:25:50,991] INFO [GroupCoordinator 1]: Assignment received from leader rdkafka-edec6ee0-defe-4544-af6c-55beaa020ced for group testgrp for generation 3. The group has 1 members, 0 of which are static. 
(kafka.coordinator.group.GroupCoordinator) 09:27:20 kafka | [2025-06-19 09:25:50,998] INFO [GroupCoordinator 1]: Preparing to rebalance group testgrp in state PreparingRebalance with old generation 3 (__consumer_offsets-3) (reason: Removing member rdkafka-edec6ee0-defe-4544-af6c-55beaa020ced on LeaveGroup; client reason: not provided) (kafka.coordinator.group.GroupCoordinator) 09:27:20 kafka | [2025-06-19 09:25:50,998] INFO [GroupCoordinator 1]: Group testgrp with generation 4 is now empty (__consumer_offsets-3) (kafka.coordinator.group.GroupCoordinator) 09:27:20 kafka | [2025-06-19 09:25:50,998] INFO [GroupCoordinator 1]: Member MemberMetadata(memberId=rdkafka-edec6ee0-defe-4544-af6c-55beaa020ced, groupInstanceId=None, clientId=rdkafka, clientHost=/172.17.0.7, sessionTimeoutMs=45000, rebalanceTimeoutMs=300000, supportedProtocols=List(range, roundrobin)) has left group testgrp through explicit `LeaveGroup`; client reason: not provided (kafka.coordinator.group.GroupCoordinator) 09:27:20 kafka | [2025-06-19 09:26:07,570] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController) 09:27:20 kafka | [2025-06-19 09:26:07,570] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController) 09:27:20 kafka | [2025-06-19 09:26:07,577] DEBUG [Controller id=1] Topics not in preferred replica for broker 1 HashMap() (kafka.controller.KafkaController) 09:27:20 kafka | [2025-06-19 09:26:07,579] TRACE [Controller id=1] Leader imbalance ratio for broker 1 is 0.0 (kafka.controller.KafkaController) 09:27:20 kafka | [2025-06-19 09:26:13,583] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group testgrp in Empty state. Created a new member id rdkafka-1ac78432-d736-4b26-842c-748c2d78c2c1 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) 09:27:20 kafka | [2025-06-19 09:26:13,584] INFO [GroupCoordinator 1]: Preparing to rebalance group testgrp in state PreparingRebalance with old generation 4 (__consumer_offsets-3) (reason: Adding new member rdkafka-1ac78432-d736-4b26-842c-748c2d78c2c1 with group instance id None; client reason: not provided) (kafka.coordinator.group.GroupCoordinator) 09:27:20 kafka | [2025-06-19 09:26:16,586] INFO [GroupCoordinator 1]: Stabilized group testgrp generation 5 (__consumer_offsets-3) with 1 members (kafka.coordinator.group.GroupCoordinator) 09:27:20 kafka | [2025-06-19 09:26:16,588] INFO [GroupCoordinator 1]: Assignment received from leader rdkafka-1ac78432-d736-4b26-842c-748c2d78c2c1 for group testgrp for generation 5. The group has 1 members, 0 of which are static. 
(kafka.coordinator.group.GroupCoordinator) 09:27:20 kafka | [2025-06-19 09:26:16,593] INFO [GroupCoordinator 1]: Preparing to rebalance group testgrp in state PreparingRebalance with old generation 5 (__consumer_offsets-3) (reason: Removing member rdkafka-1ac78432-d736-4b26-842c-748c2d78c2c1 on LeaveGroup; client reason: not provided) (kafka.coordinator.group.GroupCoordinator) 09:27:20 kafka | [2025-06-19 09:26:16,593] INFO [GroupCoordinator 1]: Group testgrp with generation 6 is now empty (__consumer_offsets-3) (kafka.coordinator.group.GroupCoordinator) 09:27:20 kafka | [2025-06-19 09:26:16,594] INFO [GroupCoordinator 1]: Member MemberMetadata(memberId=rdkafka-1ac78432-d736-4b26-842c-748c2d78c2c1, groupInstanceId=None, clientId=rdkafka, clientHost=/172.17.0.7, sessionTimeoutMs=45000, rebalanceTimeoutMs=300000, supportedProtocols=List(range, roundrobin)) has left group testgrp through explicit `LeaveGroup`; client reason: not provided (kafka.coordinator.group.GroupCoordinator) 09:27:20 policy-api | Waiting for policy-db-migrator port 6824... 09:27:20 policy-api | policy-db-migrator (172.17.0.7:6824) open 09:27:20 policy-api | Policy api config file: /opt/app/policy/api/etc/apiParameters.yaml 09:27:20 policy-api | 09:27:20 policy-api | . ____ _ __ _ _ 09:27:20 policy-api | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ 09:27:20 policy-api | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ 09:27:20 policy-api | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) 09:27:20 policy-api | ' |____| .__|_| |_|_| |_\__, | / / / / 09:27:20 policy-api | =========|_|==============|___/=/_/_/_/ 09:27:20 policy-api | 09:27:20 policy-api | :: Spring Boot :: (v3.4.6) 09:27:20 policy-api | 09:27:20 policy-api | [2025-06-19T09:21:32.166+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.2.Final 09:27:20 policy-api | [2025-06-19T09:21:32.253+00:00|INFO|PolicyApiApplication|main] Starting PolicyApiApplication using Java 17.0.15 with PID 48 (/app/api.jar started by policy in /opt/app/policy/api/bin) 09:27:20 policy-api | [2025-06-19T09:21:32.254+00:00|INFO|PolicyApiApplication|main] The following 1 profile is active: "default" 09:27:20 policy-api | [2025-06-19T09:21:33.785+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. 09:27:20 policy-api | [2025-06-19T09:21:33.968+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 172 ms. Found 6 JPA repository interfaces. 
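The kafka | entries above record a complete consumer-group lifecycle for the ad-hoc group testgrp: an rdkafka client joins with a fresh member id, the coordinator rebalances and stabilizes the group at a new generation, and the member later leaves via an explicit LeaveGroup. Below is a minimal client-side sketch of that cycle using the Python confluent-kafka binding (which wraps librdkafka, the same client library the rdkafka-* member ids come from); the broker address, group id and session timeout are taken from the log, while the topic name is an assumption for illustration only.

```python
# Minimal sketch of the join / stabilize / leave cycle recorded in the kafka logs above,
# using confluent-kafka (Python binding for librdkafka).
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "kafka:9092",   # broker address as seen in the log
    "group.id": "testgrp",               # ad-hoc group used during the CSIT checks
    "auto.offset.reset": "earliest",
    "session.timeout.ms": 45000,         # matches sessionTimeoutMs in the MemberMetadata entries
})

# subscribe() triggers the JoinGroup/SyncGroup exchange that the coordinator
# logs as "Preparing to rebalance" and "Stabilized group".
consumer.subscribe(["policy-pdp-pap"])   # topic name is an assumption, not shown in this log

msg = consumer.poll(timeout=5.0)         # polling completes the group join
if msg is not None and msg.error() is None:
    print(msg.topic(), msg.partition(), msg.offset())

# close() sends the explicit LeaveGroup seen as "has left group testgrp".
consumer.close()
```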
09:27:20 policy-api | [2025-06-19T09:21:34.693+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port 6969 (http) 09:27:20 policy-api | [2025-06-19T09:21:34.708+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] 09:27:20 policy-api | [2025-06-19T09:21:34.710+00:00|INFO|StandardService|main] Starting service [Tomcat] 09:27:20 policy-api | [2025-06-19T09:21:34.710+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.41] 09:27:20 policy-api | [2025-06-19T09:21:34.752+00:00|INFO|[/policy/api/v1]|main] Initializing Spring embedded WebApplicationContext 09:27:20 policy-api | [2025-06-19T09:21:34.752+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 2441 ms 09:27:20 policy-api | [2025-06-19T09:21:35.120+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] 09:27:20 policy-api | [2025-06-19T09:21:35.211+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.6.16.Final 09:27:20 policy-api | [2025-06-19T09:21:35.262+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled 09:27:20 policy-api | [2025-06-19T09:21:35.663+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer 09:27:20 policy-api | [2025-06-19T09:21:35.707+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... 09:27:20 policy-api | [2025-06-19T09:21:35.932+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.postgresql.jdbc.PgConnection@59aa1d1c 09:27:20 policy-api | [2025-06-19T09:21:35.934+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. 09:27:20 policy-api | [2025-06-19T09:21:36.022+00:00|INFO|pooling|main] HHH10001005: Database info: 09:27:20 policy-api | Database JDBC URL [Connecting through datasource 'HikariDataSource (HikariPool-1)'] 09:27:20 policy-api | Database driver: undefined/unknown 09:27:20 policy-api | Database version: 16.4 09:27:20 policy-api | Autocommit mode: undefined/unknown 09:27:20 policy-api | Isolation level: undefined/unknown 09:27:20 policy-api | Minimum pool size: undefined/unknown 09:27:20 policy-api | Maximum pool size: undefined/unknown 09:27:20 policy-api | [2025-06-19T09:21:38.119+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) 09:27:20 policy-api | [2025-06-19T09:21:38.122+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' 09:27:20 policy-api | [2025-06-19T09:21:38.778+00:00|WARN|ApiDatabaseInitializer|main] Detected multi-versioned type: policytypes/onap.policies.monitoring.tcagen2.v2.yaml 09:27:20 policy-api | [2025-06-19T09:21:39.653+00:00|INFO|ApiDatabaseInitializer|main] Multi-versioned Service Template [onap.policies.Monitoring, onap.policies.monitoring.tcagen2] 09:27:20 policy-api | [2025-06-19T09:21:40.894+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. 
Explicitly configure spring.jpa.open-in-view to disable this warning 09:27:20 policy-api | [2025-06-19T09:21:40.950+00:00|INFO|InitializeUserDetailsBeanManagerConfigurer$InitializeUserDetailsManagerConfigurer|main] Global AuthenticationManager configured with UserDetailsService bean with name inMemoryUserDetailsManager 09:27:20 policy-api | [2025-06-19T09:21:41.723+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoints beneath base path '' 09:27:20 policy-api | [2025-06-19T09:21:41.881+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] 09:27:20 policy-api | [2025-06-19T09:21:41.905+00:00|INFO|TomcatWebServer|main] Tomcat started on port 6969 (http) with context path '/policy/api/v1' 09:27:20 policy-api | [2025-06-19T09:21:41.929+00:00|INFO|PolicyApiApplication|main] Started PolicyApiApplication in 10.425 seconds (process running for 10.987) 09:27:20 policy-api | [2025-06-19T09:22:39.940+00:00|INFO|[/policy/api/v1]|http-nio-6969-exec-3] Initializing Spring DispatcherServlet 'dispatcherServlet' 09:27:20 policy-api | [2025-06-19T09:22:39.940+00:00|INFO|DispatcherServlet|http-nio-6969-exec-3] Initializing Servlet 'dispatcherServlet' 09:27:20 policy-api | [2025-06-19T09:22:39.941+00:00|INFO|DispatcherServlet|http-nio-6969-exec-3] Completed initialization in 1 ms 09:27:20 policy-api | [2025-06-19T09:24:59.888+00:00|INFO|OrderedServiceImpl|http-nio-6969-exec-5] ***** OrderedServiceImpl implementers: 09:27:20 policy-api | [] 09:27:20 policy-api | [2025-06-19T09:26:16.941+00:00|WARN|CommonRestController|http-nio-6969-exec-10] "incoming fragment" INVALID, item has status INVALID 09:27:20 policy-api | item "entity" value "abac:1.0.7" INVALID, does not equal existing entity 09:27:20 policy-api | 09:27:20 policy-csit | Invoking the robot tests from: opa-pdp-test.robot opa-pdp-slas.robot 09:27:20 policy-csit | Run Robot test 09:27:20 policy-csit | ROBOT_VARIABLES=-v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies 09:27:20 policy-csit | -v NODETEMPLATES:/opt/robotworkspace/models/models-examples/src/main/resources/nodetemplates 09:27:20 policy-csit | -v POLICY_API_IP:policy-api:6969 09:27:20 policy-csit | -v POLICY_RUNTIME_ACM_IP:policy-clamp-runtime-acm:6969 09:27:20 policy-csit | -v POLICY_PARTICIPANT_SIM_IP:policy-clamp-ac-sim-ppnt:6969 09:27:20 policy-csit | -v POLICY_PAP_IP:policy-pap:6969 09:27:20 policy-csit | -v APEX_IP:policy-apex-pdp:6969 09:27:20 policy-csit | -v APEX_EVENTS_IP:policy-apex-pdp:23324 09:27:20 policy-csit | -v KAFKA_IP:kafka:9092 09:27:20 policy-csit | -v PROMETHEUS_IP:prometheus:9090 09:27:20 policy-csit | -v POLICY_PDPX_IP:policy-xacml-pdp:6969 09:27:20 policy-csit | -v POLICY_OPA_IP:policy-opa-pdp:8282 09:27:20 policy-csit | -v POLICY_DROOLS_IP:policy-drools-pdp:9696 09:27:20 policy-csit | -v DROOLS_IP:policy-drools-apps:6969 09:27:20 policy-csit | -v DROOLS_IP_2:policy-drools-apps:9696 09:27:20 policy-csit | -v TEMP_FOLDER:/tmp/distribution 09:27:20 policy-csit | -v DISTRIBUTION_IP:policy-distribution:6969 09:27:20 policy-csit | -v TEST_ENV:docker 09:27:20 policy-csit | -v JAEGER_IP:jaeger:16686 09:27:20 policy-csit | Starting Robot test suites ... 
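The policy-csit lines above show the two suites being invoked (opa-pdp-test.robot, opa-pdp-slas.robot) with the listed ROBOT_VARIABLES. A rough programmatic equivalent via Robot Framework's Python entry point is sketched below; only a subset of the variables is repeated, and running the suites by bare file name from the current directory is an assumption.

```python
# Sketch: launching the same suites through Robot Framework's Python API,
# passing a subset of the ROBOT_VARIABLES listed in the log.
from robot import run

rc = run(
    "opa-pdp-test.robot",
    "opa-pdp-slas.robot",
    variable=[
        "POLICY_API_IP:policy-api:6969",
        "POLICY_PAP_IP:policy-pap:6969",
        "POLICY_OPA_IP:policy-opa-pdp:8282",
        "KAFKA_IP:kafka:9092",
        "PROMETHEUS_IP:prometheus:9090",
        "TEST_ENV:docker",
    ],
    outputdir="/tmp/results",  # matches the Output/Log/Report paths reported below
)
print("robot return code:", rc)  # 0 when all tests pass, as in this run
```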
09:27:20 policy-csit | ==============================================================================
09:27:20 policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas
09:27:20 policy-csit | ==============================================================================
09:27:20 policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas.Opa-Pdp-Test
09:27:20 policy-csit | ==============================================================================
09:27:20 policy-csit | Healthcheck :: Verify OPA PDP health check | PASS |
09:27:20 policy-csit | ------------------------------------------------------------------------------
09:27:20 policy-csit | ValidateDataBeforePolicyDeployment | PASS |
09:27:20 policy-csit | ------------------------------------------------------------------------------
09:27:20 policy-csit | ValidatesZonePolicy | PASS |
09:27:20 policy-csit | ------------------------------------------------------------------------------
09:27:20 policy-csit | ValidatesVehiclePolicy | PASS |
09:27:20 policy-csit | ------------------------------------------------------------------------------
09:27:20 policy-csit | ValidatesAbacPolicy | PASS |
09:27:20 policy-csit | ------------------------------------------------------------------------------
09:27:20 policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas.Opa-Pdp-Test | PASS |
09:27:20 policy-csit | 5 tests, 5 passed, 0 failed
09:27:20 policy-csit | ==============================================================================
09:27:20 policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas.Opa-Pdp-Slas
09:27:20 policy-csit | ==============================================================================
09:27:20 policy-csit | WaitForPrometheusServer :: Sleep time to wait for Prometheus serve... | PASS |
09:27:20 policy-csit | ------------------------------------------------------------------------------
09:27:20 policy-csit | ValidateOPAPolicyDecisionsTotalCounter :: Validate opa policy deci... | PASS |
09:27:20 policy-csit | ------------------------------------------------------------------------------
09:27:20 policy-csit | ValidateOPAPolicyDataTotalCounter :: Validate opa policy data coun... | PASS |
09:27:20 policy-csit | ------------------------------------------------------------------------------
09:27:20 policy-csit | ValidateOPADecisionAverageResponseTime :: Ensure average response ... | PASS |
09:27:20 policy-csit | ------------------------------------------------------------------------------
09:27:20 policy-csit | ValidateOPADataAverageResponseTime :: Ensure average response time... | PASS |
09:27:20 policy-csit | ------------------------------------------------------------------------------
09:27:20 policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas.Opa-Pdp-Slas | PASS |
09:27:20 policy-csit | 5 tests, 5 passed, 0 failed
09:27:20 policy-csit | ==============================================================================
09:27:20 policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas | PASS |
09:27:20 policy-csit | 10 tests, 10 passed, 0 failed
09:27:20 policy-csit | ==============================================================================
09:27:20 policy-csit | Output: /tmp/results/output.xml
09:27:20 policy-csit | Log: /tmp/results/log.html
09:27:20 policy-csit | Report: /tmp/results/report.html
09:27:20 policy-csit | RESULT: 0
09:27:20 policy-db-migrator | Waiting for postgres port 5432...
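The run writes its results to /tmp/results/output.xml (plus log.html and report.html). A small sketch of reading that file back with Robot Framework's result API, for example to re-check the 10/10 pass count from a follow-up script; the file path is the one printed above, the rest is generic robot.api usage and assumes Robot Framework 4 or newer.

```python
# Sketch: re-reading the Robot Framework results written to /tmp/results.
from robot.api import ExecutionResult

result = ExecutionResult("/tmp/results/output.xml")  # path printed in the log above
stats = result.suite.statistics                      # totals for the top-level suite (RF 4+)

print(f"passed={stats.passed} failed={stats.failed}")
# For this run the expectation is passed=10, failed=0, matching "RESULT: 0".
assert stats.failed == 0
```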
09:27:20 policy-db-migrator | nc: connect to postgres (172.17.0.3) port 5432 (tcp) failed: Connection refused 09:27:20 policy-db-migrator | nc: connect to postgres (172.17.0.3) port 5432 (tcp) failed: Connection refused 09:27:20 policy-db-migrator | Connection to postgres (172.17.0.3) 5432 port [tcp/postgresql] succeeded! 09:27:20 policy-db-migrator | Initializing policyadmin... 09:27:20 policy-db-migrator | 321 blocks 09:27:20 policy-db-migrator | Preparing upgrade release version: 0800 09:27:20 policy-db-migrator | Preparing upgrade release version: 0900 09:27:20 policy-db-migrator | Preparing upgrade release version: 1000 09:27:20 policy-db-migrator | Preparing upgrade release version: 1100 09:27:20 policy-db-migrator | Preparing upgrade release version: 1200 09:27:20 policy-db-migrator | Preparing upgrade release version: 1300 09:27:20 policy-db-migrator | Done 09:27:20 policy-db-migrator | List of databases 09:27:20 policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges 09:27:20 policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- 09:27:20 policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 09:27:20 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 09:27:20 policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 09:27:20 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 09:27:20 policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 09:27:20 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 09:27:20 policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 09:27:20 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 09:27:20 policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 09:27:20 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 09:27:20 policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 09:27:20 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 09:27:20 policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | 09:27:20 policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 09:27:20 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 09:27:20 policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 09:27:20 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 09:27:20 policy-db-migrator | (9 rows) 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | name | version 09:27:20 policy-db-migrator | -------------+--------- 09:27:20 policy-db-migrator | policyadmin | 0 09:27:20 policy-db-migrator | (1 row) 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime 09:27:20 policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+-------- 09:27:20 policy-db-migrator | (0 rows) 09:27:20 policy-db-migrator | 09:27:20 
policy-db-migrator | policyadmin: upgrade available: 0 -> 1300 09:27:20 policy-db-migrator | List of databases 09:27:20 policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges 09:27:20 policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- 09:27:20 policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 09:27:20 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 09:27:20 policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 09:27:20 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 09:27:20 policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 09:27:20 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 09:27:20 policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 09:27:20 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 09:27:20 policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 09:27:20 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 09:27:20 policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 09:27:20 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 09:27:20 policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | 09:27:20 policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 09:27:20 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 09:27:20 policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 09:27:20 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 09:27:20 policy-db-migrator | (9 rows) 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | NOTICE: relation "policyadmin_schema_changelog" already exists, skipping 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | upgrade: 0 -> 1300 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0100-jpapdpgroup_properties.sql 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0120-jpapdpsubgroup_policies.sql 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0130-jpapdpsubgroup_properties.sql 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0140-jpapdpsubgroup_supportedpolicytypes.sql 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | 
INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0150-jpatoscacapabilityassignment_attributes.sql 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0160-jpatoscacapabilityassignment_metadata.sql 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0170-jpatoscacapabilityassignment_occurrences.sql 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0180-jpatoscacapabilityassignment_properties.sql 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0190-jpatoscacapabilitytype_metadata.sql 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0200-jpatoscacapabilitytype_properties.sql 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0210-jpatoscadatatype_constraints.sql 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0220-jpatoscadatatype_metadata.sql 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0230-jpatoscadatatype_properties.sql 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0240-jpatoscanodetemplate_metadata.sql 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0250-jpatoscanodetemplate_properties.sql 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0260-jpatoscanodetype_metadata.sql 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0270-jpatoscanodetype_properties.sql 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0280-jpatoscapolicy_metadata.sql 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0290-jpatoscapolicy_properties.sql 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > 
upgrade 0300-jpatoscapolicy_targets.sql 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0310-jpatoscapolicytype_metadata.sql 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0320-jpatoscapolicytype_properties.sql 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0330-jpatoscapolicytype_targets.sql 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0340-jpatoscapolicytype_triggers.sql 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0350-jpatoscaproperty_constraints.sql 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0360-jpatoscaproperty_metadata.sql 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0370-jpatoscarelationshiptype_metadata.sql 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0380-jpatoscarelationshiptype_properties.sql 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0390-jpatoscarequirement_metadata.sql 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0400-jpatoscarequirement_occurrences.sql 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0410-jpatoscarequirement_properties.sql 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0420-jpatoscaservicetemplate_metadata.sql 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0430-jpatoscatopologytemplate_inputs.sql 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0440-pdpgroup_pdpsubgroup.sql 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0450-pdpgroup.sql 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 
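Each upgrade script in this sequence is recorded by the migrator in the policyadmin_schema_changelog table it created earlier (columns id, script, operation, from_version, to_version, tag, success, attime, as shown in the empty listing above). A quick way to inspect that progress from Python is sketched below; host, port, database and user names come from the log, while the password, the psycopg2 dependency, and keeping the changelog in the policyadmin database are assumptions.

```python
# Sketch: inspecting the migration changelog that policy-db-migrator fills in
# while it applies the upgrade scripts listed in this log.
import psycopg2

conn = psycopg2.connect(
    host="postgres",        # container name from the log
    port=5432,
    dbname="policyadmin",   # assumption: changelog kept alongside the policyadmin schema
    user="policy_user",     # owner shown in the "List of databases" output
    password="CHANGE_ME",   # credential is not shown in the log
)
with conn, conn.cursor() as cur:
    cur.execute(
        "SELECT script, operation, from_version, to_version, success, attime "
        "FROM policyadmin_schema_changelog ORDER BY id"
    )
    for script, operation, from_v, to_v, success, attime in cur.fetchall():
        print(f"{attime} {operation} {script} {from_v}->{to_v} success={success}")
conn.close()
```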
09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0460-pdppolicystatus.sql 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0470-pdp.sql 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0480-pdpstatistics.sql 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0490-pdpsubgroup_pdp.sql 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0500-pdpsubgroup.sql 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0510-toscacapabilityassignment.sql 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0520-toscacapabilityassignments.sql 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0530-toscacapabilityassignments_toscacapabilityassignment.sql 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0540-toscacapabilitytype.sql 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0550-toscacapabilitytypes.sql 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0560-toscacapabilitytypes_toscacapabilitytype.sql 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0570-toscadatatype.sql 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0580-toscadatatypes.sql 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0590-toscadatatypes_toscadatatype.sql 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0600-toscanodetemplate.sql 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0610-toscanodetemplates.sql 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 
policy-db-migrator | > upgrade 0620-toscanodetemplates_toscanodetemplate.sql 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0630-toscanodetype.sql 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0640-toscanodetypes.sql 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0650-toscanodetypes_toscanodetype.sql 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0660-toscaparameter.sql 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0670-toscapolicies.sql 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0680-toscapolicies_toscapolicy.sql 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0690-toscapolicy.sql 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0700-toscapolicytype.sql 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0710-toscapolicytypes.sql 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0720-toscapolicytypes_toscapolicytype.sql 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0730-toscaproperty.sql 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0740-toscarelationshiptype.sql 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0750-toscarelationshiptypes.sql 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0760-toscarelationshiptypes_toscarelationshiptype.sql 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0770-toscarequirement.sql 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 
0780-toscarequirements.sql 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0790-toscarequirements_toscarequirement.sql 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0800-toscaservicetemplate.sql 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0810-toscatopologytemplate.sql 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0820-toscatrigger.sql 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0830-FK_ToscaNodeTemplate_capabilitiesName.sql 09:27:20 policy-db-migrator | CREATE INDEX 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0840-FK_ToscaNodeTemplate_requirementsName.sql 09:27:20 policy-db-migrator | CREATE INDEX 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0850-FK_ToscaNodeType_requirementsName.sql 09:27:20 policy-db-migrator | CREATE INDEX 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql 09:27:20 policy-db-migrator | CREATE INDEX 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0870-FK_ToscaServiceTemplate_dataTypesName.sql 09:27:20 policy-db-migrator | CREATE INDEX 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0880-FK_ToscaServiceTemplate_nodeTypesName.sql 09:27:20 policy-db-migrator | CREATE INDEX 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0890-FK_ToscaServiceTemplate_policyTypesName.sql 09:27:20 policy-db-migrator | CREATE INDEX 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql 09:27:20 policy-db-migrator | CREATE INDEX 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql 09:27:20 policy-db-migrator | CREATE INDEX 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql 09:27:20 policy-db-migrator | CREATE INDEX 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0940-PdpPolicyStatus_PdpGroup.sql 09:27:20 policy-db-migrator | CREATE INDEX 
09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql 09:27:20 policy-db-migrator | CREATE INDEX 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0960-FK_ToscaNodeTemplate_capabilitiesName.sql 09:27:20 policy-db-migrator | ALTER TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0970-FK_ToscaNodeTemplate_requirementsName.sql 09:27:20 policy-db-migrator | ALTER TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0980-FK_ToscaNodeType_requirementsName.sql 09:27:20 policy-db-migrator | ALTER TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql 09:27:20 policy-db-migrator | ALTER TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 1000-FK_ToscaServiceTemplate_dataTypesName.sql 09:27:20 policy-db-migrator | ALTER TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 1010-FK_ToscaServiceTemplate_nodeTypesName.sql 09:27:20 policy-db-migrator | ALTER TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 1020-FK_ToscaServiceTemplate_policyTypesName.sql 09:27:20 policy-db-migrator | ALTER TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql 09:27:20 policy-db-migrator | ALTER TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql 09:27:20 policy-db-migrator | ALTER TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 1050-FK_ToscaTopologyTemplate_policyName.sql 09:27:20 policy-db-migrator | ALTER TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql 09:27:20 policy-db-migrator | ALTER TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0100-pdp.sql 09:27:20 policy-db-migrator | ALTER TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql 09:27:20 policy-db-migrator | CREATE INDEX 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0120-pk_pdpstatistics.sql 09:27:20 policy-db-migrator | ALTER TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator 
| rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0130-pdpstatistics.sql 09:27:20 policy-db-migrator | ALTER TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0140-pk_pdpstatistics.sql 09:27:20 policy-db-migrator | UPDATE 0 09:27:20 policy-db-migrator | ALTER TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0150-pdpstatistics.sql 09:27:20 policy-db-migrator | ALTER TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0160-jpapdpstatistics_enginestats.sql 09:27:20 policy-db-migrator | ALTER TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0170-jpapdpstatistics_enginestats.sql 09:27:20 policy-db-migrator | UPDATE 0 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0180-jpapdpstatistics_enginestats.sql 09:27:20 policy-db-migrator | ALTER TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0190-jpapolicyaudit.sql 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0200-JpaPolicyAuditIndex_timestamp.sql 09:27:20 policy-db-migrator | CREATE INDEX 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0210-sequence.sql 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0220-sequence.sql 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0100-jpatoscapolicy_targets.sql 09:27:20 policy-db-migrator | ALTER TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0110-jpatoscapolicytype_targets.sql 09:27:20 policy-db-migrator | ALTER TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0120-toscatrigger.sql 09:27:20 policy-db-migrator | DROP TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0130-jpatoscapolicytype_triggers.sql 09:27:20 policy-db-migrator | ALTER TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0140-toscaparameter.sql 09:27:20 policy-db-migrator | DROP TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0150-toscaproperty.sql 09:27:20 policy-db-migrator | DROP TABLE 09:27:20 policy-db-migrator | DROP TABLE 09:27:20 policy-db-migrator | DROP TABLE 09:27:20 policy-db-migrator | INSERT 0 
1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0160-jpapolicyaudit_pk.sql 09:27:20 policy-db-migrator | ALTER TABLE 09:27:20 policy-db-migrator | ALTER TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0170-pdpstatistics_pk.sql 09:27:20 policy-db-migrator | ALTER TABLE 09:27:20 policy-db-migrator | ALTER TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0180-jpatoscanodetemplate_metadata.sql 09:27:20 policy-db-migrator | ALTER TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0100-upgrade.sql 09:27:20 policy-db-migrator | msg 09:27:20 policy-db-migrator | --------------------------- 09:27:20 policy-db-migrator | upgrade to 1100 completed 09:27:20 policy-db-migrator | (1 row) 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0100-jpapolicyaudit_renameuser.sql 09:27:20 policy-db-migrator | ALTER TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql 09:27:20 policy-db-migrator | DROP INDEX 09:27:20 policy-db-migrator | CREATE INDEX 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0120-audit_sequence.sql 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0130-statistics_sequence.sql 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0100-pdpstatistics.sql 09:27:20 policy-db-migrator | DROP TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql 09:27:20 policy-db-migrator | DROP TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0120-statistics_sequence.sql 09:27:20 policy-db-migrator | DROP TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | policyadmin: OK: upgrade (1300) 09:27:20 policy-db-migrator | List of databases 09:27:20 policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges 09:27:20 policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- 09:27:20 policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 09:27:20 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 09:27:20 policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 09:27:20 policy-db-migrator 
| | | | | | | | | policy_user=CTc/policy_user 09:27:20 policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 09:27:20 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 09:27:20 policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 09:27:20 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 09:27:20 policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 09:27:20 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 09:27:20 policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 09:27:20 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 09:27:20 policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | 09:27:20 policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 09:27:20 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 09:27:20 policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 09:27:20 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 09:27:20 policy-db-migrator | (9 rows) 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | NOTICE: relation "policyadmin_schema_changelog" already exists, skipping 09:27:20 policy-db-migrator | name | version 09:27:20 policy-db-migrator | -------------+--------- 09:27:20 policy-db-migrator | policyadmin | 1300 09:27:20 policy-db-migrator | (1 row) 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime 09:27:20 policy-db-migrator | -----+---------------------------------------------------------------+-----------+--------------+------------+-------------------+---------+---------------------------- 09:27:20 policy-db-migrator | 1 | 0100-jpapdpgroup_properties.sql | upgrade | 0 | 0800 | 1906250920560800u | 1 | 2025-06-19 09:20:56.545846 09:27:20 policy-db-migrator | 2 | 0110-jpapdpstatistics_enginestats.sql | upgrade | 0 | 0800 | 1906250920560800u | 1 | 2025-06-19 09:20:56.596153 09:27:20 policy-db-migrator | 3 | 0120-jpapdpsubgroup_policies.sql | upgrade | 0 | 0800 | 1906250920560800u | 1 | 2025-06-19 09:20:56.648422 09:27:20 policy-db-migrator | 4 | 0130-jpapdpsubgroup_properties.sql | upgrade | 0 | 0800 | 1906250920560800u | 1 | 2025-06-19 09:20:56.704437 09:27:20 policy-db-migrator | 5 | 0140-jpapdpsubgroup_supportedpolicytypes.sql | upgrade | 0 | 0800 | 1906250920560800u | 1 | 2025-06-19 09:20:56.761227 09:27:20 policy-db-migrator | 6 | 0150-jpatoscacapabilityassignment_attributes.sql | upgrade | 0 | 0800 | 1906250920560800u | 1 | 2025-06-19 09:20:56.82094 09:27:20 policy-db-migrator | 7 | 0160-jpatoscacapabilityassignment_metadata.sql | upgrade | 0 | 0800 | 1906250920560800u | 1 | 2025-06-19 09:20:56.875069 09:27:20 policy-db-migrator | 8 | 0170-jpatoscacapabilityassignment_occurrences.sql | upgrade | 0 | 0800 | 1906250920560800u | 1 | 2025-06-19 09:20:56.92272 09:27:20 policy-db-migrator | 9 | 0180-jpatoscacapabilityassignment_properties.sql | upgrade | 0 | 0800 | 1906250920560800u | 1 | 2025-06-19 09:20:56.976591 09:27:20 policy-db-migrator | 10 | 
0190-jpatoscacapabilitytype_metadata.sql | upgrade | 0 | 0800 | 1906250920560800u | 1 | 2025-06-19 09:20:57.018688 09:27:20 policy-db-migrator | 11 | 0200-jpatoscacapabilitytype_properties.sql | upgrade | 0 | 0800 | 1906250920560800u | 1 | 2025-06-19 09:20:57.05965 09:27:20 policy-db-migrator | 12 | 0210-jpatoscadatatype_constraints.sql | upgrade | 0 | 0800 | 1906250920560800u | 1 | 2025-06-19 09:20:57.102461 09:27:20 policy-db-migrator | 13 | 0220-jpatoscadatatype_metadata.sql | upgrade | 0 | 0800 | 1906250920560800u | 1 | 2025-06-19 09:20:57.153927 09:27:20 policy-db-migrator | 14 | 0230-jpatoscadatatype_properties.sql | upgrade | 0 | 0800 | 1906250920560800u | 1 | 2025-06-19 09:20:57.208766 09:27:20 policy-db-migrator | 15 | 0240-jpatoscanodetemplate_metadata.sql | upgrade | 0 | 0800 | 1906250920560800u | 1 | 2025-06-19 09:20:57.254948 09:27:20 policy-db-migrator | 16 | 0250-jpatoscanodetemplate_properties.sql | upgrade | 0 | 0800 | 1906250920560800u | 1 | 2025-06-19 09:20:57.310572 09:27:20 policy-db-migrator | 17 | 0260-jpatoscanodetype_metadata.sql | upgrade | 0 | 0800 | 1906250920560800u | 1 | 2025-06-19 09:20:57.363641 09:27:20 policy-db-migrator | 18 | 0270-jpatoscanodetype_properties.sql | upgrade | 0 | 0800 | 1906250920560800u | 1 | 2025-06-19 09:20:57.417959 09:27:20 policy-db-migrator | 19 | 0280-jpatoscapolicy_metadata.sql | upgrade | 0 | 0800 | 1906250920560800u | 1 | 2025-06-19 09:20:57.473608 09:27:20 policy-db-migrator | 20 | 0290-jpatoscapolicy_properties.sql | upgrade | 0 | 0800 | 1906250920560800u | 1 | 2025-06-19 09:20:57.528813 09:27:20 policy-db-migrator | 21 | 0300-jpatoscapolicy_targets.sql | upgrade | 0 | 0800 | 1906250920560800u | 1 | 2025-06-19 09:20:57.576649 09:27:20 policy-db-migrator | 22 | 0310-jpatoscapolicytype_metadata.sql | upgrade | 0 | 0800 | 1906250920560800u | 1 | 2025-06-19 09:20:57.625147 09:27:20 policy-db-migrator | 23 | 0320-jpatoscapolicytype_properties.sql | upgrade | 0 | 0800 | 1906250920560800u | 1 | 2025-06-19 09:20:57.670939 09:27:20 policy-db-migrator | 24 | 0330-jpatoscapolicytype_targets.sql | upgrade | 0 | 0800 | 1906250920560800u | 1 | 2025-06-19 09:20:57.718229 09:27:20 policy-db-migrator | 25 | 0340-jpatoscapolicytype_triggers.sql | upgrade | 0 | 0800 | 1906250920560800u | 1 | 2025-06-19 09:20:57.770137 09:27:20 policy-db-migrator | 26 | 0350-jpatoscaproperty_constraints.sql | upgrade | 0 | 0800 | 1906250920560800u | 1 | 2025-06-19 09:20:57.821895 09:27:20 policy-db-migrator | 27 | 0360-jpatoscaproperty_metadata.sql | upgrade | 0 | 0800 | 1906250920560800u | 1 | 2025-06-19 09:20:57.871941 09:27:20 policy-db-migrator | 28 | 0370-jpatoscarelationshiptype_metadata.sql | upgrade | 0 | 0800 | 1906250920560800u | 1 | 2025-06-19 09:20:57.922115 09:27:20 policy-db-migrator | 29 | 0380-jpatoscarelationshiptype_properties.sql | upgrade | 0 | 0800 | 1906250920560800u | 1 | 2025-06-19 09:20:57.975266 09:27:20 policy-db-migrator | 30 | 0390-jpatoscarequirement_metadata.sql | upgrade | 0 | 0800 | 1906250920560800u | 1 | 2025-06-19 09:20:58.030198 09:27:20 policy-db-migrator | 31 | 0400-jpatoscarequirement_occurrences.sql | upgrade | 0 | 0800 | 1906250920560800u | 1 | 2025-06-19 09:20:58.078801 09:27:20 policy-db-migrator | 32 | 0410-jpatoscarequirement_properties.sql | upgrade | 0 | 0800 | 1906250920560800u | 1 | 2025-06-19 09:20:58.131396 09:27:20 policy-db-migrator | 33 | 0420-jpatoscaservicetemplate_metadata.sql | upgrade | 0 | 0800 | 1906250920560800u | 1 | 2025-06-19 09:20:58.188782 09:27:20 policy-db-migrator | 34 | 
0430-jpatoscatopologytemplate_inputs.sql | upgrade | 0 | 0800 | 1906250920560800u | 1 | 2025-06-19 09:20:58.239372 09:27:20 policy-db-migrator | 35 | 0440-pdpgroup_pdpsubgroup.sql | upgrade | 0 | 0800 | 1906250920560800u | 1 | 2025-06-19 09:20:58.293571 09:27:20 policy-db-migrator | 36 | 0450-pdpgroup.sql | upgrade | 0 | 0800 | 1906250920560800u | 1 | 2025-06-19 09:20:58.341661 09:27:20 policy-db-migrator | 37 | 0460-pdppolicystatus.sql | upgrade | 0 | 0800 | 1906250920560800u | 1 | 2025-06-19 09:20:58.404214 09:27:20 policy-db-migrator | 38 | 0470-pdp.sql | upgrade | 0 | 0800 | 1906250920560800u | 1 | 2025-06-19 09:20:58.459792 09:27:20 policy-db-migrator | 39 | 0480-pdpstatistics.sql | upgrade | 0 | 0800 | 1906250920560800u | 1 | 2025-06-19 09:20:58.519537 09:27:20 policy-db-migrator | 40 | 0490-pdpsubgroup_pdp.sql | upgrade | 0 | 0800 | 1906250920560800u | 1 | 2025-06-19 09:20:58.576329 09:27:20 policy-db-migrator | 41 | 0500-pdpsubgroup.sql | upgrade | 0 | 0800 | 1906250920560800u | 1 | 2025-06-19 09:20:58.625755 09:27:20 policy-db-migrator | 42 | 0510-toscacapabilityassignment.sql | upgrade | 0 | 0800 | 1906250920560800u | 1 | 2025-06-19 09:20:58.678172 09:27:20 policy-db-migrator | 43 | 0520-toscacapabilityassignments.sql | upgrade | 0 | 0800 | 1906250920560800u | 1 | 2025-06-19 09:20:58.727041 09:27:20 policy-db-migrator | 44 | 0530-toscacapabilityassignments_toscacapabilityassignment.sql | upgrade | 0 | 0800 | 1906250920560800u | 1 | 2025-06-19 09:20:58.780972 09:27:20 policy-db-migrator | 45 | 0540-toscacapabilitytype.sql | upgrade | 0 | 0800 | 1906250920560800u | 1 | 2025-06-19 09:20:58.836428 09:27:20 policy-db-migrator | 46 | 0550-toscacapabilitytypes.sql | upgrade | 0 | 0800 | 1906250920560800u | 1 | 2025-06-19 09:20:58.886551 09:27:20 policy-db-migrator | 47 | 0560-toscacapabilitytypes_toscacapabilitytype.sql | upgrade | 0 | 0800 | 1906250920560800u | 1 | 2025-06-19 09:20:58.938391 09:27:20 policy-db-migrator | 48 | 0570-toscadatatype.sql | upgrade | 0 | 0800 | 1906250920560800u | 1 | 2025-06-19 09:20:58.997155 09:27:20 policy-db-migrator | 49 | 0580-toscadatatypes.sql | upgrade | 0 | 0800 | 1906250920560800u | 1 | 2025-06-19 09:20:59.052259 09:27:20 policy-db-migrator | 50 | 0590-toscadatatypes_toscadatatype.sql | upgrade | 0 | 0800 | 1906250920560800u | 1 | 2025-06-19 09:20:59.105683 09:27:20 policy-db-migrator | 51 | 0600-toscanodetemplate.sql | upgrade | 0 | 0800 | 1906250920560800u | 1 | 2025-06-19 09:20:59.161775 09:27:20 policy-db-migrator | 52 | 0610-toscanodetemplates.sql | upgrade | 0 | 0800 | 1906250920560800u | 1 | 2025-06-19 09:20:59.212306 09:27:20 policy-db-migrator | 53 | 0620-toscanodetemplates_toscanodetemplate.sql | upgrade | 0 | 0800 | 1906250920560800u | 1 | 2025-06-19 09:20:59.26248 09:27:20 policy-db-migrator | 54 | 0630-toscanodetype.sql | upgrade | 0 | 0800 | 1906250920560800u | 1 | 2025-06-19 09:20:59.613004 09:27:20 policy-db-migrator | 55 | 0640-toscanodetypes.sql | upgrade | 0 | 0800 | 1906250920560800u | 1 | 2025-06-19 09:21:22.042188 09:27:20 policy-db-migrator | 56 | 0650-toscanodetypes_toscanodetype.sql | upgrade | 0 | 0800 | 1906250920560800u | 1 | 2025-06-19 09:21:22.092481 09:27:20 policy-db-migrator | 57 | 0660-toscaparameter.sql | upgrade | 0 | 0800 | 1906250920560800u | 1 | 2025-06-19 09:21:22.160705 09:27:20 policy-db-migrator | 58 | 0670-toscapolicies.sql | upgrade | 0 | 0800 | 1906250920560800u | 1 | 2025-06-19 09:21:22.211489 09:27:20 policy-db-migrator | 59 | 0680-toscapolicies_toscapolicy.sql | upgrade | 0 | 0800 | 
1906250920560800u | 1 | 2025-06-19 09:21:22.264655 09:27:20 policy-db-migrator | 60 | 0690-toscapolicy.sql | upgrade | 0 | 0800 | 1906250920560800u | 1 | 2025-06-19 09:21:22.325186 09:27:20 policy-db-migrator | 61 | 0700-toscapolicytype.sql | upgrade | 0 | 0800 | 1906250920560800u | 1 | 2025-06-19 09:21:22.379712 09:27:20 policy-db-migrator | 62 | 0710-toscapolicytypes.sql | upgrade | 0 | 0800 | 1906250920560800u | 1 | 2025-06-19 09:21:22.433551 09:27:20 policy-db-migrator | 63 | 0720-toscapolicytypes_toscapolicytype.sql | upgrade | 0 | 0800 | 1906250920560800u | 1 | 2025-06-19 09:21:22.4886 09:27:20 policy-db-migrator | 64 | 0730-toscaproperty.sql | upgrade | 0 | 0800 | 1906250920560800u | 1 | 2025-06-19 09:21:22.546486 09:27:20 policy-db-migrator | 65 | 0740-toscarelationshiptype.sql | upgrade | 0 | 0800 | 1906250920560800u | 1 | 2025-06-19 09:21:22.598455 09:27:20 policy-db-migrator | 66 | 0750-toscarelationshiptypes.sql | upgrade | 0 | 0800 | 1906250920560800u | 1 | 2025-06-19 09:21:22.653931 09:27:20 policy-db-migrator | 67 | 0760-toscarelationshiptypes_toscarelationshiptype.sql | upgrade | 0 | 0800 | 1906250920560800u | 1 | 2025-06-19 09:21:22.704696 09:27:20 policy-db-migrator | 68 | 0770-toscarequirement.sql | upgrade | 0 | 0800 | 1906250920560800u | 1 | 2025-06-19 09:21:22.759047 09:27:20 policy-db-migrator | 69 | 0780-toscarequirements.sql | upgrade | 0 | 0800 | 1906250920560800u | 1 | 2025-06-19 09:21:22.815588 09:27:20 policy-db-migrator | 70 | 0790-toscarequirements_toscarequirement.sql | upgrade | 0 | 0800 | 1906250920560800u | 1 | 2025-06-19 09:21:22.885093 09:27:20 policy-db-migrator | 71 | 0800-toscaservicetemplate.sql | upgrade | 0 | 0800 | 1906250920560800u | 1 | 2025-06-19 09:21:22.944615 09:27:20 policy-db-migrator | 72 | 0810-toscatopologytemplate.sql | upgrade | 0 | 0800 | 1906250920560800u | 1 | 2025-06-19 09:21:23.00717 09:27:20 policy-db-migrator | 73 | 0820-toscatrigger.sql | upgrade | 0 | 0800 | 1906250920560800u | 1 | 2025-06-19 09:21:23.064499 09:27:20 policy-db-migrator | 74 | 0830-FK_ToscaNodeTemplate_capabilitiesName.sql | upgrade | 0 | 0800 | 1906250920560800u | 1 | 2025-06-19 09:21:23.118278 09:27:20 policy-db-migrator | 75 | 0840-FK_ToscaNodeTemplate_requirementsName.sql | upgrade | 0 | 0800 | 1906250920560800u | 1 | 2025-06-19 09:21:23.174654 09:27:20 policy-db-migrator | 76 | 0850-FK_ToscaNodeType_requirementsName.sql | upgrade | 0 | 0800 | 1906250920560800u | 1 | 2025-06-19 09:21:23.233299 09:27:20 policy-db-migrator | 77 | 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql | upgrade | 0 | 0800 | 1906250920560800u | 1 | 2025-06-19 09:21:23.286403 09:27:20 policy-db-migrator | 78 | 0870-FK_ToscaServiceTemplate_dataTypesName.sql | upgrade | 0 | 0800 | 1906250920560800u | 1 | 2025-06-19 09:21:23.338526 09:27:20 policy-db-migrator | 79 | 0880-FK_ToscaServiceTemplate_nodeTypesName.sql | upgrade | 0 | 0800 | 1906250920560800u | 1 | 2025-06-19 09:21:23.394929 09:27:20 policy-db-migrator | 80 | 0890-FK_ToscaServiceTemplate_policyTypesName.sql | upgrade | 0 | 0800 | 1906250920560800u | 1 | 2025-06-19 09:21:23.449145 09:27:20 policy-db-migrator | 81 | 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql | upgrade | 0 | 0800 | 1906250920560800u | 1 | 2025-06-19 09:21:23.499832 09:27:20 policy-db-migrator | 82 | 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql | upgrade | 0 | 0800 | 1906250920560800u | 1 | 2025-06-19 09:21:23.55501 09:27:20 policy-db-migrator | 83 | 0920-FK_ToscaTopologyTemplate_policyName.sql | upgrade | 0 | 0800 | 1906250920560800u | 1 | 
2025-06-19 09:21:23.603234 09:27:20 policy-db-migrator | 84 | 0940-PdpPolicyStatus_PdpGroup.sql | upgrade | 0 | 0800 | 1906250920560800u | 1 | 2025-06-19 09:21:23.653636 09:27:20 policy-db-migrator | 85 | 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql | upgrade | 0 | 0800 | 1906250920560800u | 1 | 2025-06-19 09:21:23.705042 09:27:20 policy-db-migrator | 86 | 0960-FK_ToscaNodeTemplate_capabilitiesName.sql | upgrade | 0 | 0800 | 1906250920560800u | 1 | 2025-06-19 09:21:23.758432 09:27:20 policy-db-migrator | 87 | 0970-FK_ToscaNodeTemplate_requirementsName.sql | upgrade | 0 | 0800 | 1906250920560800u | 1 | 2025-06-19 09:21:23.810366 09:27:20 policy-db-migrator | 88 | 0980-FK_ToscaNodeType_requirementsName.sql | upgrade | 0 | 0800 | 1906250920560800u | 1 | 2025-06-19 09:21:23.863658 09:27:20 policy-db-migrator | 89 | 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql | upgrade | 0 | 0800 | 1906250920560800u | 1 | 2025-06-19 09:21:23.918022 09:27:20 policy-db-migrator | 90 | 1000-FK_ToscaServiceTemplate_dataTypesName.sql | upgrade | 0 | 0800 | 1906250920560800u | 1 | 2025-06-19 09:21:23.954923 09:27:20 policy-db-migrator | 91 | 1010-FK_ToscaServiceTemplate_nodeTypesName.sql | upgrade | 0 | 0800 | 1906250920560800u | 1 | 2025-06-19 09:21:24.00237 09:27:20 policy-db-migrator | 92 | 1020-FK_ToscaServiceTemplate_policyTypesName.sql | upgrade | 0 | 0800 | 1906250920560800u | 1 | 2025-06-19 09:21:24.052992 09:27:20 policy-db-migrator | 93 | 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql | upgrade | 0 | 0800 | 1906250920560800u | 1 | 2025-06-19 09:21:24.101855 09:27:20 policy-db-migrator | 94 | 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql | upgrade | 0 | 0800 | 1906250920560800u | 1 | 2025-06-19 09:21:24.156545 09:27:20 policy-db-migrator | 95 | 1050-FK_ToscaTopologyTemplate_policyName.sql | upgrade | 0 | 0800 | 1906250920560800u | 1 | 2025-06-19 09:21:24.211916 09:27:20 policy-db-migrator | 96 | 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql | upgrade | 0 | 0800 | 1906250920560800u | 1 | 2025-06-19 09:21:24.264318 09:27:20 policy-db-migrator | 97 | 0100-pdp.sql | upgrade | 0800 | 0900 | 1906250920560900u | 1 | 2025-06-19 09:21:24.316508 09:27:20 policy-db-migrator | 98 | 0110-idx_tsidx1.sql | upgrade | 0800 | 0900 | 1906250920560900u | 1 | 2025-06-19 09:21:24.36928 09:27:20 policy-db-migrator | 99 | 0120-pk_pdpstatistics.sql | upgrade | 0800 | 0900 | 1906250920560900u | 1 | 2025-06-19 09:21:24.420527 09:27:20 policy-db-migrator | 100 | 0130-pdpstatistics.sql | upgrade | 0800 | 0900 | 1906250920560900u | 1 | 2025-06-19 09:21:24.475184 09:27:20 policy-db-migrator | 101 | 0140-pk_pdpstatistics.sql | upgrade | 0800 | 0900 | 1906250920560900u | 1 | 2025-06-19 09:21:24.535263 09:27:20 policy-db-migrator | 102 | 0150-pdpstatistics.sql | upgrade | 0800 | 0900 | 1906250920560900u | 1 | 2025-06-19 09:21:24.589316 09:27:20 policy-db-migrator | 103 | 0160-jpapdpstatistics_enginestats.sql | upgrade | 0800 | 0900 | 1906250920560900u | 1 | 2025-06-19 09:21:24.638427 09:27:20 policy-db-migrator | 104 | 0170-jpapdpstatistics_enginestats.sql | upgrade | 0800 | 0900 | 1906250920560900u | 1 | 2025-06-19 09:21:24.687079 09:27:20 policy-db-migrator | 105 | 0180-jpapdpstatistics_enginestats.sql | upgrade | 0800 | 0900 | 1906250920560900u | 1 | 2025-06-19 09:21:24.737488 09:27:20 policy-db-migrator | 106 | 0190-jpapolicyaudit.sql | upgrade | 0800 | 0900 | 1906250920560900u | 1 | 2025-06-19 09:21:24.79254 09:27:20 policy-db-migrator | 107 | 0200-JpaPolicyAuditIndex_timestamp.sql | 
upgrade | 0800 | 0900 | 1906250920560900u | 1 | 2025-06-19 09:21:24.847429 09:27:20 policy-db-migrator | 108 | 0210-sequence.sql | upgrade | 0800 | 0900 | 1906250920560900u | 1 | 2025-06-19 09:21:24.902513 09:27:20 policy-db-migrator | 109 | 0220-sequence.sql | upgrade | 0800 | 0900 | 1906250920560900u | 1 | 2025-06-19 09:21:24.954703 09:27:20 policy-db-migrator | 110 | 0100-jpatoscapolicy_targets.sql | upgrade | 0900 | 1000 | 1906250920561000u | 1 | 2025-06-19 09:21:25.011315 09:27:20 policy-db-migrator | 111 | 0110-jpatoscapolicytype_targets.sql | upgrade | 0900 | 1000 | 1906250920561000u | 1 | 2025-06-19 09:21:25.067463 09:27:20 policy-db-migrator | 112 | 0120-toscatrigger.sql | upgrade | 0900 | 1000 | 1906250920561000u | 1 | 2025-06-19 09:21:25.11853 09:27:20 policy-db-migrator | 113 | 0130-jpatoscapolicytype_triggers.sql | upgrade | 0900 | 1000 | 1906250920561000u | 1 | 2025-06-19 09:21:25.185612 09:27:20 policy-db-migrator | 114 | 0140-toscaparameter.sql | upgrade | 0900 | 1000 | 1906250920561000u | 1 | 2025-06-19 09:21:25.238741 09:27:20 policy-db-migrator | 115 | 0150-toscaproperty.sql | upgrade | 0900 | 1000 | 1906250920561000u | 1 | 2025-06-19 09:21:25.298252 09:27:20 policy-db-migrator | 116 | 0160-jpapolicyaudit_pk.sql | upgrade | 0900 | 1000 | 1906250920561000u | 1 | 2025-06-19 09:21:25.36113 09:27:20 policy-db-migrator | 117 | 0170-pdpstatistics_pk.sql | upgrade | 0900 | 1000 | 1906250920561000u | 1 | 2025-06-19 09:21:25.415835 09:27:20 policy-db-migrator | 118 | 0180-jpatoscanodetemplate_metadata.sql | upgrade | 0900 | 1000 | 1906250920561000u | 1 | 2025-06-19 09:21:25.466653 09:27:20 policy-db-migrator | 119 | 0100-upgrade.sql | upgrade | 1000 | 1100 | 1906250920561100u | 1 | 2025-06-19 09:21:25.5137 09:27:20 policy-db-migrator | 120 | 0100-jpapolicyaudit_renameuser.sql | upgrade | 1100 | 1200 | 1906250920561200u | 1 | 2025-06-19 09:21:25.562918 09:27:20 policy-db-migrator | 121 | 0110-idx_tsidx1.sql | upgrade | 1100 | 1200 | 1906250920561200u | 1 | 2025-06-19 09:21:25.623559 09:27:20 policy-db-migrator | 122 | 0120-audit_sequence.sql | upgrade | 1100 | 1200 | 1906250920561200u | 1 | 2025-06-19 09:21:25.682501 09:27:20 policy-db-migrator | 123 | 0130-statistics_sequence.sql | upgrade | 1100 | 1200 | 1906250920561200u | 1 | 2025-06-19 09:21:25.737173 09:27:20 policy-db-migrator | 124 | 0100-pdpstatistics.sql | upgrade | 1200 | 1300 | 1906250920561300u | 1 | 2025-06-19 09:21:25.790315 09:27:20 policy-db-migrator | 125 | 0110-jpapdpstatistics_enginestats.sql | upgrade | 1200 | 1300 | 1906250920561300u | 1 | 2025-06-19 09:21:25.841863 09:27:20 policy-db-migrator | 126 | 0120-statistics_sequence.sql | upgrade | 1200 | 1300 | 1906250920561300u | 1 | 2025-06-19 09:21:25.891724 09:27:20 policy-db-migrator | (126 rows) 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | policyadmin: OK @ 1300 09:27:20 policy-db-migrator | Initializing clampacm... 
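The changelog listing above is the migrator's own bookkeeping: the schema version ends up in schema_versions and every applied script in a per-schema changelog table (here policyadmin_schema_changelog), both of which appear in the psql output. A minimal sketch, assuming only the table and column names visible in that output, of how the final policyadmin state could be checked by hand:

    -- assumes the bookkeeping tables shown in the log above
    SELECT name, version
      FROM schema_versions
     WHERE name = 'policyadmin';          -- expected: 1300, as reported above

    SELECT id, script, operation, from_version, to_version, tag, success, attime
      FROM policyadmin_schema_changelog
     ORDER BY id;                         -- the 126 rows listed above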
09:27:20 policy-db-migrator | 97 blocks 09:27:20 policy-db-migrator | Preparing upgrade release version: 1400 09:27:20 policy-db-migrator | Preparing upgrade release version: 1500 09:27:20 policy-db-migrator | Preparing upgrade release version: 1600 09:27:20 policy-db-migrator | Preparing upgrade release version: 1601 09:27:20 policy-db-migrator | Preparing upgrade release version: 1700 09:27:20 policy-db-migrator | Preparing upgrade release version: 1701 09:27:20 policy-db-migrator | Done 09:27:20 policy-db-migrator | List of databases 09:27:20 policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges 09:27:20 policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- 09:27:20 policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 09:27:20 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 09:27:20 policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 09:27:20 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 09:27:20 policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 09:27:20 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 09:27:20 policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 09:27:20 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 09:27:20 policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 09:27:20 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 09:27:20 policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 09:27:20 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 09:27:20 policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | 09:27:20 policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 09:27:20 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 09:27:20 policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 09:27:20 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 09:27:20 policy-db-migrator | (9 rows) 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | name | version 09:27:20 policy-db-migrator | ----------+--------- 09:27:20 policy-db-migrator | clampacm | 0 09:27:20 policy-db-migrator | (1 row) 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime 09:27:20 policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+-------- 09:27:20 policy-db-migrator | (0 rows) 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | clampacm: upgrade available: 0 -> 1701 09:27:20 policy-db-migrator | List of databases 09:27:20 policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges 09:27:20 policy-db-migrator | 
-------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- 09:27:20 policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 09:27:20 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 09:27:20 policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 09:27:20 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 09:27:20 policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 09:27:20 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 09:27:20 policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 09:27:20 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 09:27:20 policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 09:27:20 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 09:27:20 policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 09:27:20 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 09:27:20 policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | 09:27:20 policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 09:27:20 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 09:27:20 policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 09:27:20 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 09:27:20 policy-db-migrator | (9 rows) 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | NOTICE: relation "clampacm_schema_changelog" already exists, skipping 09:27:20 policy-db-migrator | upgrade: 0 -> 1701 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0100-automationcomposition.sql 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0200-automationcompositiondefinition.sql 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0300-automationcompositionelement.sql 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0400-nodetemplatestate.sql 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0500-participant.sql 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0600-participantsupportedelements.sql 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 
09:27:20 policy-db-migrator | > upgrade 0700-ac_compositionId_index.sql 09:27:20 policy-db-migrator | CREATE INDEX 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0800-ac_element_fk_index.sql 09:27:20 policy-db-migrator | CREATE INDEX 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0900-dt_element_fk_index.sql 09:27:20 policy-db-migrator | CREATE INDEX 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 1000-supported_element_fk_index.sql 09:27:20 policy-db-migrator | CREATE INDEX 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 1100-automationcompositionelement_fk.sql 09:27:20 policy-db-migrator | ALTER TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 1200-nodetemplate_fk.sql 09:27:20 policy-db-migrator | ALTER TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 1300-participantsupportedelements_fk.sql 09:27:20 policy-db-migrator | ALTER TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0100-automationcomposition.sql 09:27:20 policy-db-migrator | ALTER TABLE 09:27:20 policy-db-migrator | UPDATE 0 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0200-automationcompositiondefinition.sql 09:27:20 policy-db-migrator | ALTER TABLE 09:27:20 policy-db-migrator | UPDATE 0 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0300-participantreplica.sql 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0400-participant.sql 09:27:20 policy-db-migrator | ALTER TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0500-participant_replica_fk_index.sql 09:27:20 policy-db-migrator | CREATE INDEX 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0600-participant_replica_fk.sql 09:27:20 policy-db-migrator | ALTER TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0700-automationcompositionelement.sql 09:27:20 policy-db-migrator | UPDATE 0 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0800-nodetemplatestate.sql 09:27:20 policy-db-migrator | UPDATE 0 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0100-automationcomposition.sql 09:27:20 policy-db-migrator | ALTER TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 
policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0200-automationcompositionelement.sql 09:27:20 policy-db-migrator | ALTER TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0100-automationcomposition.sql 09:27:20 policy-db-migrator | UPDATE 0 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0200-automationcompositionelement.sql 09:27:20 policy-db-migrator | UPDATE 0 09:27:20 policy-db-migrator | UPDATE 0 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0100-message.sql 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0200-messagejob.sql 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0300-messagejob_identificationId_index.sql 09:27:20 policy-db-migrator | CREATE INDEX 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0100-automationcompositionrollback.sql 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0200-automationcomposition.sql 09:27:20 policy-db-migrator | UPDATE 0 09:27:20 policy-db-migrator | UPDATE 0 09:27:20 policy-db-migrator | UPDATE 0 09:27:20 policy-db-migrator | UPDATE 0 09:27:20 policy-db-migrator | UPDATE 0 09:27:20 policy-db-migrator | UPDATE 0 09:27:20 policy-db-migrator | ALTER TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0300-automationcompositionelement.sql 09:27:20 policy-db-migrator | UPDATE 0 09:27:20 policy-db-migrator | UPDATE 0 09:27:20 policy-db-migrator | UPDATE 0 09:27:20 policy-db-migrator | UPDATE 0 09:27:20 policy-db-migrator | UPDATE 0 09:27:20 policy-db-migrator | ALTER TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0400-automationcomposition_fk.sql 09:27:20 policy-db-migrator | ALTER TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0500-automationcompositiondefinition.sql 09:27:20 policy-db-migrator | UPDATE 0 09:27:20 policy-db-migrator | UPDATE 0 09:27:20 policy-db-migrator | UPDATE 0 09:27:20 policy-db-migrator | UPDATE 0 09:27:20 policy-db-migrator | UPDATE 0 09:27:20 policy-db-migrator | ALTER TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0600-nodetemplatestate.sql 09:27:20 policy-db-migrator | UPDATE 0 09:27:20 policy-db-migrator | UPDATE 0 09:27:20 policy-db-migrator | UPDATE 0 09:27:20 policy-db-migrator | UPDATE 0 09:27:20 policy-db-migrator | ALTER TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator 
| > upgrade 0700-mb_identificationId_index.sql 09:27:20 policy-db-migrator | CREATE INDEX 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0800-participantreplica.sql 09:27:20 policy-db-migrator | UPDATE 0 09:27:20 policy-db-migrator | UPDATE 0 09:27:20 policy-db-migrator | ALTER TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0900-participantsupportedacelements.sql 09:27:20 policy-db-migrator | UPDATE 0 09:27:20 policy-db-migrator | UPDATE 0 09:27:20 policy-db-migrator | ALTER TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | clampacm: OK: upgrade (1701) 09:27:20 policy-db-migrator | List of databases 09:27:20 policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges 09:27:20 policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- 09:27:20 policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 09:27:20 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 09:27:20 policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 09:27:20 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 09:27:20 policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 09:27:20 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 09:27:20 policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 09:27:20 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 09:27:20 policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 09:27:20 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 09:27:20 policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 09:27:20 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 09:27:20 policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | 09:27:20 policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 09:27:20 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 09:27:20 policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 09:27:20 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 09:27:20 policy-db-migrator | (9 rows) 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping 09:27:20 policy-db-migrator | NOTICE: relation "clampacm_schema_changelog" already exists, skipping 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | name | version 09:27:20 policy-db-migrator | ----------+--------- 09:27:20 policy-db-migrator | clampacm | 1701 09:27:20 policy-db-migrator | (1 row) 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime 09:27:20 policy-db-migrator | 
----+--------------------------------------------+-----------+--------------+------------+-------------------+---------+---------------------------- 09:27:20 policy-db-migrator | 1 | 0100-automationcomposition.sql | upgrade | 1300 | 1400 | 1906250921261400u | 1 | 2025-06-19 09:21:26.571023 09:27:20 policy-db-migrator | 2 | 0200-automationcompositiondefinition.sql | upgrade | 1300 | 1400 | 1906250921261400u | 1 | 2025-06-19 09:21:26.628073 09:27:20 policy-db-migrator | 3 | 0300-automationcompositionelement.sql | upgrade | 1300 | 1400 | 1906250921261400u | 1 | 2025-06-19 09:21:26.68554 09:27:20 policy-db-migrator | 4 | 0400-nodetemplatestate.sql | upgrade | 1300 | 1400 | 1906250921261400u | 1 | 2025-06-19 09:21:26.740651 09:27:20 policy-db-migrator | 5 | 0500-participant.sql | upgrade | 1300 | 1400 | 1906250921261400u | 1 | 2025-06-19 09:21:26.800207 09:27:20 policy-db-migrator | 6 | 0600-participantsupportedelements.sql | upgrade | 1300 | 1400 | 1906250921261400u | 1 | 2025-06-19 09:21:26.853111 09:27:20 policy-db-migrator | 7 | 0700-ac_compositionId_index.sql | upgrade | 1300 | 1400 | 1906250921261400u | 1 | 2025-06-19 09:21:26.901434 09:27:20 policy-db-migrator | 8 | 0800-ac_element_fk_index.sql | upgrade | 1300 | 1400 | 1906250921261400u | 1 | 2025-06-19 09:21:26.960416 09:27:20 policy-db-migrator | 9 | 0900-dt_element_fk_index.sql | upgrade | 1300 | 1400 | 1906250921261400u | 1 | 2025-06-19 09:21:27.017164 09:27:20 policy-db-migrator | 10 | 1000-supported_element_fk_index.sql | upgrade | 1300 | 1400 | 1906250921261400u | 1 | 2025-06-19 09:21:27.072045 09:27:20 policy-db-migrator | 11 | 1100-automationcompositionelement_fk.sql | upgrade | 1300 | 1400 | 1906250921261400u | 1 | 2025-06-19 09:21:27.12811 09:27:20 policy-db-migrator | 12 | 1200-nodetemplate_fk.sql | upgrade | 1300 | 1400 | 1906250921261400u | 1 | 2025-06-19 09:21:27.186156 09:27:20 policy-db-migrator | 13 | 1300-participantsupportedelements_fk.sql | upgrade | 1300 | 1400 | 1906250921261400u | 1 | 2025-06-19 09:21:27.246338 09:27:20 policy-db-migrator | 14 | 0100-automationcomposition.sql | upgrade | 1400 | 1500 | 1906250921261500u | 1 | 2025-06-19 09:21:27.301997 09:27:20 policy-db-migrator | 15 | 0200-automationcompositiondefinition.sql | upgrade | 1400 | 1500 | 1906250921261500u | 1 | 2025-06-19 09:21:27.350189 09:27:20 policy-db-migrator | 16 | 0300-participantreplica.sql | upgrade | 1400 | 1500 | 1906250921261500u | 1 | 2025-06-19 09:21:27.41157 09:27:20 policy-db-migrator | 17 | 0400-participant.sql | upgrade | 1400 | 1500 | 1906250921261500u | 1 | 2025-06-19 09:21:27.465076 09:27:20 policy-db-migrator | 18 | 0500-participant_replica_fk_index.sql | upgrade | 1400 | 1500 | 1906250921261500u | 1 | 2025-06-19 09:21:27.519535 09:27:20 policy-db-migrator | 19 | 0600-participant_replica_fk.sql | upgrade | 1400 | 1500 | 1906250921261500u | 1 | 2025-06-19 09:21:27.572493 09:27:20 policy-db-migrator | 20 | 0700-automationcompositionelement.sql | upgrade | 1400 | 1500 | 1906250921261500u | 1 | 2025-06-19 09:21:27.614024 09:27:20 policy-db-migrator | 21 | 0800-nodetemplatestate.sql | upgrade | 1400 | 1500 | 1906250921261500u | 1 | 2025-06-19 09:21:27.658164 09:27:20 policy-db-migrator | 22 | 0100-automationcomposition.sql | upgrade | 1500 | 1600 | 1906250921261600u | 1 | 2025-06-19 09:21:27.708249 09:27:20 policy-db-migrator | 23 | 0200-automationcompositionelement.sql | upgrade | 1500 | 1600 | 1906250921261600u | 1 | 2025-06-19 09:21:27.760391 09:27:20 policy-db-migrator | 24 | 0100-automationcomposition.sql | upgrade | 1501 | 
1601 | 1906250921261601u | 1 | 2025-06-19 09:21:27.81137 09:27:20 policy-db-migrator | 25 | 0200-automationcompositionelement.sql | upgrade | 1501 | 1601 | 1906250921261601u | 1 | 2025-06-19 09:21:27.861739 09:27:20 policy-db-migrator | 26 | 0100-message.sql | upgrade | 1600 | 1700 | 1906250921261700u | 1 | 2025-06-19 09:21:27.914828 09:27:20 policy-db-migrator | 27 | 0200-messagejob.sql | upgrade | 1600 | 1700 | 1906250921261700u | 1 | 2025-06-19 09:21:27.968937 09:27:20 policy-db-migrator | 28 | 0300-messagejob_identificationId_index.sql | upgrade | 1600 | 1700 | 1906250921261700u | 1 | 2025-06-19 09:21:28.02532 09:27:20 policy-db-migrator | 29 | 0100-automationcompositionrollback.sql | upgrade | 1601 | 1701 | 1906250921261701u | 1 | 2025-06-19 09:21:28.083427 09:27:20 policy-db-migrator | 30 | 0200-automationcomposition.sql | upgrade | 1601 | 1701 | 1906250921261701u | 1 | 2025-06-19 09:21:28.132098 09:27:20 policy-db-migrator | 31 | 0300-automationcompositionelement.sql | upgrade | 1601 | 1701 | 1906250921261701u | 1 | 2025-06-19 09:21:28.186555 09:27:20 policy-db-migrator | 32 | 0400-automationcomposition_fk.sql | upgrade | 1601 | 1701 | 1906250921261701u | 1 | 2025-06-19 09:21:28.239245 09:27:20 policy-db-migrator | 33 | 0500-automationcompositiondefinition.sql | upgrade | 1601 | 1701 | 1906250921261701u | 1 | 2025-06-19 09:21:28.29331 09:27:20 policy-db-migrator | 34 | 0600-nodetemplatestate.sql | upgrade | 1601 | 1701 | 1906250921261701u | 1 | 2025-06-19 09:21:28.347766 09:27:20 policy-db-migrator | 35 | 0700-mb_identificationId_index.sql | upgrade | 1601 | 1701 | 1906250921261701u | 1 | 2025-06-19 09:21:28.399493 09:27:20 policy-db-migrator | 36 | 0800-participantreplica.sql | upgrade | 1601 | 1701 | 1906250921261701u | 1 | 2025-06-19 09:21:28.450586 09:27:20 policy-db-migrator | 37 | 0900-participantsupportedacelements.sql | upgrade | 1601 | 1701 | 1906250921261701u | 1 | 2025-06-19 09:21:28.494562 09:27:20 policy-db-migrator | (37 rows) 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | clampacm: OK @ 1701 09:27:20 policy-db-migrator | Initializing pooling... 
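The same pattern repeats for each schema the migrator initializes (policyadmin, clampacm, pooling, operationshistory): a row per schema in schema_versions plus a matching <schema>_schema_changelog table. A hedged one-liner, again assuming only the tables shown in the output, to list the final version recorded for every tracked schema:

    -- assumes the schema_versions table shown in the log above
    SELECT name, version FROM schema_versions ORDER BY name;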
09:27:20 policy-db-migrator | 4 blocks 09:27:20 policy-db-migrator | Preparing upgrade release version: 1600 09:27:20 policy-db-migrator | Done 09:27:20 policy-db-migrator | List of databases 09:27:20 policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges 09:27:20 policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- 09:27:20 policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 09:27:20 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 09:27:20 policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 09:27:20 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 09:27:20 policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 09:27:20 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 09:27:20 policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 09:27:20 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 09:27:20 policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 09:27:20 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 09:27:20 policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 09:27:20 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 09:27:20 policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | 09:27:20 policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 09:27:20 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 09:27:20 policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 09:27:20 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 09:27:20 policy-db-migrator | (9 rows) 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | name | version 09:27:20 policy-db-migrator | ---------+--------- 09:27:20 policy-db-migrator | pooling | 0 09:27:20 policy-db-migrator | (1 row) 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime 09:27:20 policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+-------- 09:27:20 policy-db-migrator | (0 rows) 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | pooling: upgrade available: 0 -> 1600 09:27:20 policy-db-migrator | List of databases 09:27:20 policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges 09:27:20 policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- 09:27:20 policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 09:27:20 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 09:27:20 policy-db-migrator | migration | 
policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 09:27:20 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 09:27:20 policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 09:27:20 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 09:27:20 policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 09:27:20 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 09:27:20 policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 09:27:20 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 09:27:20 policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 09:27:20 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 09:27:20 policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | 09:27:20 policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 09:27:20 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 09:27:20 policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 09:27:20 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 09:27:20 policy-db-migrator | (9 rows) 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | NOTICE: relation "pooling_schema_changelog" already exists, skipping 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | upgrade: 0 -> 1600 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0100-distributed.locking.sql 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | CREATE INDEX 09:27:20 policy-db-migrator | CREATE INDEX 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | pooling: OK: upgrade (1600) 09:27:20 policy-db-migrator | List of databases 09:27:20 policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges 09:27:20 policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- 09:27:20 policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 09:27:20 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 09:27:20 policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 09:27:20 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 09:27:20 policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 09:27:20 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 09:27:20 policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 09:27:20 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 09:27:20 policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 09:27:20 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 09:27:20 policy-db-migrator | pooling | 
policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 09:27:20 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 09:27:20 policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | 09:27:20 policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 09:27:20 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 09:27:20 policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 09:27:20 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 09:27:20 policy-db-migrator | (9 rows) 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | NOTICE: relation "pooling_schema_changelog" already exists, skipping 09:27:20 policy-db-migrator | name | version 09:27:20 policy-db-migrator | ---------+--------- 09:27:20 policy-db-migrator | pooling | 1600 09:27:20 policy-db-migrator | (1 row) 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime 09:27:20 policy-db-migrator | ----+------------------------------+-----------+--------------+------------+-------------------+---------+--------------------------- 09:27:20 policy-db-migrator | 1 | 0100-distributed.locking.sql | upgrade | 1500 | 1600 | 1906250921291600u | 1 | 2025-06-19 09:21:29.16401 09:27:20 policy-db-migrator | (1 row) 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | pooling: OK @ 1600 09:27:20 policy-db-migrator | Initializing operationshistory... 09:27:20 policy-db-migrator | 6 blocks 09:27:20 policy-db-migrator | Preparing upgrade release version: 1600 09:27:20 policy-db-migrator | Done 09:27:20 policy-db-migrator | List of databases 09:27:20 policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges 09:27:20 policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- 09:27:20 policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 09:27:20 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 09:27:20 policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 09:27:20 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 09:27:20 policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 09:27:20 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 09:27:20 policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 09:27:20 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 09:27:20 policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 09:27:20 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 09:27:20 policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 09:27:20 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 09:27:20 policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | 09:27:20 policy-db-migrator | template0 | 
postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 09:27:20 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 09:27:20 policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 09:27:20 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 09:27:20 policy-db-migrator | (9 rows) 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | name | version 09:27:20 policy-db-migrator | -------------------+--------- 09:27:20 policy-db-migrator | operationshistory | 0 09:27:20 policy-db-migrator | (1 row) 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime 09:27:20 policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+-------- 09:27:20 policy-db-migrator | (0 rows) 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | operationshistory: upgrade available: 0 -> 1600 09:27:20 policy-db-migrator | List of databases 09:27:20 policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges 09:27:20 policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- 09:27:20 policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 09:27:20 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 09:27:20 policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 09:27:20 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 09:27:20 policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 09:27:20 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 09:27:20 policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 09:27:20 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 09:27:20 policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 09:27:20 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 09:27:20 policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 09:27:20 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 09:27:20 policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | 09:27:20 policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 09:27:20 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 09:27:20 policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 09:27:20 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 09:27:20 policy-db-migrator | (9 rows) 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | NOTICE: relation "operationshistory_schema_changelog" already exists, skipping 09:27:20 policy-db-migrator | upgrade: 
0 -> 1600 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0100-ophistory_id_sequence.sql 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | rc=0 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | > upgrade 0110-operationshistory.sql 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | CREATE INDEX 09:27:20 policy-db-migrator | CREATE INDEX 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | INSERT 0 1 09:27:20 policy-db-migrator | operationshistory: OK: upgrade (1600) 09:27:20 policy-db-migrator | List of databases 09:27:20 policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges 09:27:20 policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- 09:27:20 policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 09:27:20 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 09:27:20 policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 09:27:20 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 09:27:20 policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 09:27:20 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 09:27:20 policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 09:27:20 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 09:27:20 policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 09:27:20 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 09:27:20 policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 09:27:20 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 09:27:20 policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | 09:27:20 policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 09:27:20 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 09:27:20 policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 09:27:20 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 09:27:20 policy-db-migrator | (9 rows) 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | NOTICE: relation "operationshistory_schema_changelog" already exists, skipping 09:27:20 policy-db-migrator | CREATE TABLE 09:27:20 policy-db-migrator | name | version 09:27:20 policy-db-migrator | -------------------+--------- 09:27:20 policy-db-migrator | operationshistory | 1600 09:27:20 policy-db-migrator | (1 row) 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime 09:27:20 policy-db-migrator | ----+--------------------------------+-----------+--------------+------------+-------------------+---------+---------------------------- 09:27:20 policy-db-migrator | 1 | 
0100-ophistory_id_sequence.sql | upgrade | 1500 | 1600 | 1906250921291600u | 1 | 2025-06-19 09:21:29.826213 09:27:20 policy-db-migrator | 2 | 0110-operationshistory.sql | upgrade | 1500 | 1600 | 1906250921291600u | 1 | 2025-06-19 09:21:29.888633 09:27:20 policy-db-migrator | (2 rows) 09:27:20 policy-db-migrator | 09:27:20 policy-db-migrator | operationshistory: OK @ 1600 09:27:20 policy-opa-pdp | Waiting for kafka port 9092... 09:27:20 policy-opa-pdp | nc: connect to kafka (172.17.0.5) port 9092 (tcp) failed: Connection refused 09:27:20 policy-opa-pdp | nc: connect to kafka (172.17.0.5) port 9092 (tcp) failed: Connection refused 09:27:20 policy-opa-pdp | Connection to kafka (172.17.0.5) 9092 port [tcp/*] succeeded! 09:27:20 policy-opa-pdp | Waiting for pap port 6969... 09:27:20 policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused 09:27:20 policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused 09:27:20 policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused 09:27:20 policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused 09:27:20 policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused 09:27:20 policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused 09:27:20 policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused 09:27:20 policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused 09:27:20 policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused 09:27:20 policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused 09:27:20 policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused 09:27:20 policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused 09:27:20 policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused 09:27:20 policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused 09:27:20 policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused 09:27:20 policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused 09:27:20 policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused 09:27:20 policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused 09:27:20 policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused 09:27:20 policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused 09:27:20 policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused 09:27:20 policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused 09:27:20 policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused 09:27:20 policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused 09:27:20 policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused 09:27:20 policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused 09:27:20 policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused 09:27:20 policy-opa-pdp | nc: connect to pap 
(172.17.0.9) port 6969 (tcp) failed: Connection refused 09:27:20 policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused 09:27:20 policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused 09:27:20 policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused 09:27:20 policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused 09:27:20 policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused 09:27:20 policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused 09:27:20 policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused 09:27:20 policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused 09:27:20 policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused 09:27:20 policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused 09:27:20 policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused 09:27:20 policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused 09:27:20 policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused 09:27:20 policy-opa-pdp | Connection to pap (172.17.0.9) 6969 port [tcp/*] succeeded! 09:27:20 policy-opa-pdp | time="2025-06-19T09:22:34Z" level=debug msg="###################################### " 09:27:20 policy-opa-pdp | time="2025-06-19T09:22:34Z" level=debug msg="OPA-PDP: Starting initialisation " 09:27:20 policy-opa-pdp | time="2025-06-19T09:22:34Z" level=debug msg="###################################### " 09:27:20 policy-opa-pdp | time="2025-06-19T09:22:34Z" level=warning msg="KAFKA_URL not defined, using default value" 09:27:20 policy-opa-pdp | time="2025-06-19T09:22:34Z" level=warning msg="PAP_TOPIC not defined, using default value" 09:27:20 policy-opa-pdp | time="2025-06-19T09:22:34Z" level=warning msg="PATCH_TOPIC not defined, using default value" 09:27:20 policy-opa-pdp | time="2025-06-19T09:22:34Z" level=warning msg="PATCH_GROUPID not defined, using default value" 09:27:20 policy-opa-pdp | time="2025-06-19T09:22:34Z" level=warning msg="API_USER not defined, using default value" 09:27:20 policy-opa-pdp | time="2025-06-19T09:22:34Z" level=warning msg="API_PASSWORD not defined, using default value" 09:27:20 policy-opa-pdp | time="2025-06-19T09:22:34Z" level=warning msg="UseSASLForKAFKA not defined, using default value" 09:27:20 policy-opa-pdp | decodedConfig org.apache.kafka.common.security.scram.ScramLoginModule required username="policy-opa-pdp-ku" password="" 09:27:20 policy-opa-pdp | time="2025-06-19T09:22:34Z" level=debug msg="Username: " 09:27:20 policy-opa-pdp | time="2025-06-19T09:22:34Z" level=debug msg="Password: " 09:27:20 policy-opa-pdp | time="2025-06-19T09:22:34Z" level=warning msg="USE_KAFKA_FOR_PATCH not defined, using default value: false" 09:27:20 policy-opa-pdp | time="2025-06-19T09:22:34Z" level=debug msg="Configuration module: environment initialised" 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:22:34.0317+00:00] logger initialised Filepath = /var/logs/logs.log, Logsize(MB) = 10, Backups = 3, Loglevel = debug 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:22:34.0322+00:00] Name: opa-6446e8da-32c3-48c2-9df7-d65664d9050e 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:22:34.0353+00:00] Starting OPA PDP Service 09:27:20 
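The startup script above simply retries `nc` against kafka:9092 and pap:6969 until the ports accept a connection. A minimal Python sketch of the same readiness wait (the hosts/ports are the ones printed in the log; the 2-second retry interval is an assumption, the real script's sleep is not shown):

```python
import socket
import time

def wait_for_port(host: str, port: int, interval: float = 2.0) -> None:
    # Keep retrying until a TCP connection to host:port succeeds, like the nc loop above.
    while True:
        try:
            with socket.create_connection((host, port), timeout=2.0):
                print(f"Connection to {host} {port} port succeeded!")
                return
        except OSError:
            print(f"connect to {host} port {port} (tcp) failed: Connection refused")
            time.sleep(interval)

# wait_for_port("kafka", 9092)
# wait_for_port("pap", 6969)
```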
policy-opa-pdp | INFO[2025-06-19T09:22:39.0396+00:00] HTTP server started 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:22:39.0410+00:00] Create an instance of OPA Object 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:22:39.0411+00:00] Configure an instance of OPA Object 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:22:39.0422+00:00] Topic start :::: policy-pdp-pap 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:22:39.0423+00:00] Creating Kafka Consumer singleton instance 09:27:20 policy-opa-pdp | &map[auto.offset.reset:latest bootstrap.servers:kafka:9092 group.id:opa-pdp]DEBU[2025-06-19T09:22:39.0451+00:00] Topic Subscribed: policy-pdp-pap 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:22:39.0451+00:00] Created SIngleton consumer instance 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:22:39.0579+00:00] Starting PDP Message Listener..... 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:22:49.0671+00:00] New Ticker started with interval 60000 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:22:59.0728+00:00] After registration successful delay 09:27:20 policy-opa-pdp | 2025/06/19 09:23:49 KafkaProducer or producer produce message 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:23:49.0914+00:00] [OUT|KAFKA|policy-pdp-pap] 09:27:20 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Status Registration Message","response":null,"policies":[],"name":"opa-6446e8da-32c3-48c2-9df7-d65664d9050e","requestId":"03f32307-1284-4774-b130-da249ecd5247","pdpGroup":"opaGroup","pdpSubgroup":null,"timestampMs":"1750325029090","deploymentInstanceInfo":""} 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:23:49.0916+00:00] Sending Heartbeat ... 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:23:49.1174+00:00] [IN|KAFKA|policy-pdp-pap] 09:27:20 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Status Registration Message","response":null,"policies":[],"name":"opa-6446e8da-32c3-48c2-9df7-d65664d9050e","requestId":"03f32307-1284-4774-b130-da249ecd5247","pdpGroup":"opaGroup","pdpSubgroup":null,"timestampMs":"1750325029090","deploymentInstanceInfo":""} 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:23:49.1175+00:00] messageType: PDP_STATUS 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:23:49.1176+00:00] discarding event of type PDP_STATUS 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:23:49.7336+00:00] [IN|KAFKA|policy-pdp-pap] 09:27:20 policy-opa-pdp | 
{"source":"pap-6f1d1af8-1c2a-4d8a-8de4-bf40559a9b3f","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.slice.capacity.check":"ewogICAgInRocmVzaG9sZCI6IDcwCn0="},"policy":{"slice.capacity.check":"cGFja2FnZSBzbGljZS5jYXBhY2l0eS5jaGVjawoKIyBEZWZhdWx0IHJ1bGUgdG8gZGVueSBpZiBubyBwb2xpY3kgbWF0Y2hlcwpkZWZhdWx0IGRlY2lzaW9uIDo9IHsKCSJyZXN1bHQiOiAiUGVybWl0IiwKCSJyZWFzb24iOiAiTm8gbWF0Y2hpbmcgcnVsZXMgYXBwbGllZCIsCn0KCiMgRGVueSBydWxlIGZvciBgc3N0ID0gMWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAxCglpbnB1dC50b3RhbF9yZXNvdXJjZSA+IGRhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGQKfQoKIyBEZW55IHJ1bGUgZm9yIGBzc3QgPSAyOWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAyOQoJaW5wdXQudG90YWxfcmVzb3VyY2UgPiBkYXRhLm5vZGUuc2xpY2UuY2FwYWNpdHkuY2hlY2sudGhyZXNob2xkCn0K"}},"name":"slice.capacity.check","version":"1.0.0","metadata":{"policy-id":"slice.capacity.check","policy-version":"1.0.0"}}],"messageName":"PDP_UPDATE","requestId":"659e2a07-1e2b-4f69-8e42-e74eba4472f6","timestampMs":1750325029654,"name":"opa-6446e8da-32c3-48c2-9df7-d65664d9050e","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:23:49.7338+00:00] messageType: PDP_UPDATE 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:23:49.7341+00:00] PDP_UPDATE Message received: 
{"source":"pap-6f1d1af8-1c2a-4d8a-8de4-bf40559a9b3f","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.slice.capacity.check":"ewogICAgInRocmVzaG9sZCI6IDcwCn0="},"policy":{"slice.capacity.check":"cGFja2FnZSBzbGljZS5jYXBhY2l0eS5jaGVjawoKIyBEZWZhdWx0IHJ1bGUgdG8gZGVueSBpZiBubyBwb2xpY3kgbWF0Y2hlcwpkZWZhdWx0IGRlY2lzaW9uIDo9IHsKCSJyZXN1bHQiOiAiUGVybWl0IiwKCSJyZWFzb24iOiAiTm8gbWF0Y2hpbmcgcnVsZXMgYXBwbGllZCIsCn0KCiMgRGVueSBydWxlIGZvciBgc3N0ID0gMWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAxCglpbnB1dC50b3RhbF9yZXNvdXJjZSA+IGRhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGQKfQoKIyBEZW55IHJ1bGUgZm9yIGBzc3QgPSAyOWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAyOQoJaW5wdXQudG90YWxfcmVzb3VyY2UgPiBkYXRhLm5vZGUuc2xpY2UuY2FwYWNpdHkuY2hlY2sudGhyZXNob2xkCn0K"}},"name":"slice.capacity.check","version":"1.0.0","metadata":{"policy-id":"slice.capacity.check","policy-version":"1.0.0"}}],"messageName":"PDP_UPDATE","requestId":"659e2a07-1e2b-4f69-8e42-e74eba4472f6","timestampMs":1750325029654,"name":"opa-6446e8da-32c3-48c2-9df7-d65664d9050e","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:23:49.7341+00:00] Policy Is Allowed: slice.capacity.check 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:23:49.7342+00:00] Validating properties data for policy: slice.capacity.check 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:23:49.7342+00:00] Validating properties policy for policy: slice.capacity.check 09:27:20 policy-opa-pdp | INFO[2025-06-19T09:23:49.7342+00:00] Validation successful for policy: slice.capacity.check 09:27:20 policy-opa-pdp | INFO[2025-06-19T09:23:49.7345+00:00] Directory created: /opt/policies/slice/capacity/check 09:27:20 policy-opa-pdp | INFO[2025-06-19T09:23:49.7345+00:00] Policy file saved: /opt/policies/slice/capacity/check/policy.rego 09:27:20 policy-opa-pdp | INFO[2025-06-19T09:23:49.7347+00:00] Directory created: /opt/data/node/slice/capacity/check 09:27:20 policy-opa-pdp | INFO[2025-06-19T09:23:49.7347+00:00] Data file saved: /opt/data/node/slice/capacity/check/data.json 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:23:49.7347+00:00] Before calling combinedoutput 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:23:49.7583+00:00] Bundle Built Sucessfully.... 
09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:23:49.7623+00:00] storage not found creating : /node 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:23:49.7624+00:00] storage not found creating : /node/slice 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:23:49.7624+00:00] storage not found creating : /node/slice/capacity 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:23:49.7624+00:00] storage not found creating : /node/slice/capacity/check 09:27:20 policy-opa-pdp | INFO[2025-06-19T09:23:49.7625+00:00] PoliciesDeployed Map: { 09:27:20 policy-opa-pdp | "deployed_policies_dict": [ 09:27:20 policy-opa-pdp | { 09:27:20 policy-opa-pdp | "data": [ 09:27:20 policy-opa-pdp | "node.slice.capacity.check" 09:27:20 policy-opa-pdp | ], 09:27:20 policy-opa-pdp | "policy": [ 09:27:20 policy-opa-pdp | "slice.capacity.check" 09:27:20 policy-opa-pdp | ], 09:27:20 policy-opa-pdp | "policy-id": "slice.capacity.check", 09:27:20 policy-opa-pdp | "policy-version": "1.0.0" 09:27:20 policy-opa-pdp | } 09:27:20 policy-opa-pdp | ] 09:27:20 policy-opa-pdp | } 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:23:49.7626+00:00] Loaded Policy: slice.capacity.check 09:27:20 policy-opa-pdp | INFO[2025-06-19T09:23:49.7626+00:00] Processed policies_to_be_deployed successfully 09:27:20 policy-opa-pdp | INFO[2025-06-19T09:23:49.7627+00:00] Sending PDP Status With Update Response 09:27:20 policy-opa-pdp | 2025/06/19 09:23:49 KafkaProducer or producer produce message 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:23:49.7629+00:00] [OUT|KAFKA|policy-pdp-pap] 09:27:20 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"659e2a07-1e2b-4f69-8e42-e74eba4472f6","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"slice.capacity.check\": \"1.0.0\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-6446e8da-32c3-48c2-9df7-d65664d9050e","requestId":"bab848e5-df3c-4f23-b3bc-bca0a9d81cf1","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750325029762","deploymentInstanceInfo":""} 09:27:20 policy-opa-pdp | INFO[2025-06-19T09:23:49.7630+00:00] PDP_STATUS Message Sent Successfully 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:23:49.7630+00:00] 120000 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:23:49.7631+00:00] New Ticker started with interval 120000 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:23:49.7783+00:00] [IN|KAFKA|policy-pdp-pap] 09:27:20 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"659e2a07-1e2b-4f69-8e42-e74eba4472f6","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"slice.capacity.check\": \"1.0.0\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-6446e8da-32c3-48c2-9df7-d65664d9050e","requestId":"bab848e5-df3c-4f23-b3bc-bca0a9d81cf1","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750325029762","deploymentInstanceInfo":""} 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:23:49.7784+00:00] messageType: PDP_STATUS 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:23:49.7784+00:00] discarding event of type PDP_STATUS 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:23:49.8135+00:00] [IN|KAFKA|policy-pdp-pap] 09:27:20 policy-opa-pdp | 
{"source":"pap-6f1d1af8-1c2a-4d8a-8de4-bf40559a9b3f","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"a6922d7d-589c-46cd-9caa-cee843a203b2","timestampMs":1750325029655,"name":"opa-6446e8da-32c3-48c2-9df7-d65664d9050e","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:23:49.8136+00:00] messageType: PDP_STATE_CHANGE 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:23:49.8137+00:00] PDP STATE CHANGE message received: {"source":"pap-6f1d1af8-1c2a-4d8a-8de4-bf40559a9b3f","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"a6922d7d-589c-46cd-9caa-cee843a203b2","timestampMs":1750325029655,"name":"opa-6446e8da-32c3-48c2-9df7-d65664d9050e","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:23:49.8138+00:00] State change from PASSIVE To : ACTIVE 09:27:20 policy-opa-pdp | INFO[2025-06-19T09:23:49.8138+00:00] Sending PDP Status With State Change response 09:27:20 policy-opa-pdp | 2025/06/19 09:23:49 KafkaProducer or producer produce message 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:23:49.8140+00:00] [OUT|KAFKA|policy-pdp-pap] 09:27:20 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message to Pdp State Change","response":{"responseTo":"a6922d7d-589c-46cd-9caa-cee843a203b2","responseStatus":"SUCCESS","responseMessage":"PDP State Changed From PASSIVE TO Active"},"policies":[],"name":"opa-6446e8da-32c3-48c2-9df7-d65664d9050e","requestId":"b7178b8b-01a6-49dd-ae5b-921dfaf66037","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750325029813","deploymentInstanceInfo":""} 09:27:20 policy-opa-pdp | INFO[2025-06-19T09:23:49.8140+00:00] PDP_STATUS With State Change Message Sent Successfully 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:23:49.8226+00:00] [IN|KAFKA|policy-pdp-pap] 09:27:20 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message to Pdp State Change","response":{"responseTo":"a6922d7d-589c-46cd-9caa-cee843a203b2","responseStatus":"SUCCESS","responseMessage":"PDP State Changed From PASSIVE TO Active"},"policies":[],"name":"opa-6446e8da-32c3-48c2-9df7-d65664d9050e","requestId":"b7178b8b-01a6-49dd-ae5b-921dfaf66037","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750325029813","deploymentInstanceInfo":""} 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:23:49.8226+00:00] messageType: PDP_STATUS 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:23:49.8226+00:00] discarding event of type PDP_STATUS 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:23:50.1568+00:00] [IN|KAFKA|policy-pdp-pap] 09:27:20 policy-opa-pdp | {"source":"pap-6f1d1af8-1c2a-4d8a-8de4-bf40559a9b3f","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"8b173699-25bf-4249-b02b-519f85c7ac42","timestampMs":1750325030140,"name":"opa-6446e8da-32c3-48c2-9df7-d65664d9050e","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:23:50.1569+00:00] messageType: PDP_UPDATE 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:23:50.1572+00:00] PDP_UPDATE Message received: 
{"source":"pap-6f1d1af8-1c2a-4d8a-8de4-bf40559a9b3f","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"8b173699-25bf-4249-b02b-519f85c7ac42","timestampMs":1750325030140,"name":"opa-6446e8da-32c3-48c2-9df7-d65664d9050e","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 09:27:20 policy-opa-pdp | INFO[2025-06-19T09:23:50.1573+00:00] Sending PDP Status With Update Response 09:27:20 policy-opa-pdp | 2025/06/19 09:23:50 KafkaProducer or producer produce message 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:23:50.1575+00:00] [OUT|KAFKA|policy-pdp-pap] 09:27:20 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"8b173699-25bf-4249-b02b-519f85c7ac42","responseStatus":"SUCCESS","responseMessage":"PDP UPDATE is successfull"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-6446e8da-32c3-48c2-9df7-d65664d9050e","requestId":"bd283154-4046-4a6a-bbcb-ff7759e8bf7a","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750325030157","deploymentInstanceInfo":""} 09:27:20 policy-opa-pdp | INFO[2025-06-19T09:23:50.1575+00:00] PDP_STATUS Message Sent Successfully 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:23:50.1575+00:00] 120000 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:23:50.1663+00:00] [IN|KAFKA|policy-pdp-pap] 09:27:20 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"8b173699-25bf-4249-b02b-519f85c7ac42","responseStatus":"SUCCESS","responseMessage":"PDP UPDATE is successfull"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-6446e8da-32c3-48c2-9df7-d65664d9050e","requestId":"bd283154-4046-4a6a-bbcb-ff7759e8bf7a","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750325030157","deploymentInstanceInfo":""} 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:23:50.1664+00:00] messageType: PDP_STATUS 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:23:50.1664+00:00] discarding event of type PDP_STATUS 09:27:20 policy-opa-pdp | 2025/06/19 09:24:49 KafkaProducer or producer produce message 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:24:49.0859+00:00] [OUT|KAFKA|policy-pdp-pap] 09:27:20 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp heartbeat","response":null,"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-6446e8da-32c3-48c2-9df7-d65664d9050e","requestId":"4cf58d2a-0c0c-4b8b-969d-a2a7c9c825a4","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750325089085","deploymentInstanceInfo":""} 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:24:49.0860+00:00] Sending Heartbeat ... 
09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:24:49.0959+00:00] [IN|KAFKA|policy-pdp-pap] 09:27:20 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp heartbeat","response":null,"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-6446e8da-32c3-48c2-9df7-d65664d9050e","requestId":"4cf58d2a-0c0c-4b8b-969d-a2a7c9c825a4","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750325089085","deploymentInstanceInfo":""} 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:24:49.0960+00:00] messageType: PDP_STATUS 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:24:49.0961+00:00] discarding event of type PDP_STATUS 09:27:20 policy-opa-pdp | WARN[2025-06-19T09:24:59.6403+00:00] Invalid or Missing Request ID 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:24:59.6404+00:00] Received Health Check message 09:27:20 policy-opa-pdp | INFO[2025-06-19T09:24:59.6474+00:00] PDP received a request to get data through API 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:24:59.6475+00:00] datapath to get Data : / 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:24:59.6476+00:00] Json Data at /: {"node":{"slice":{"capacity":{"check":{"threshold":70}}}},"system":{"version":{"build_commit":"","build_hostname":"","build_timestamp":"","version":"1.1.0"}}} 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:25:01.1273+00:00] [IN|KAFKA|policy-pdp-pap] 09:27:20 policy-opa-pdp | {"source":"pap-6f1d1af8-1c2a-4d8a-8de4-bf40559a9b3f","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.zoneB":"ewogICJ6b25lIjogewogICAgInpvbmVfYWNjZXNzX2xvZ3MiOiBbCiAgICAgIHsgImxvZ19pZCI6ICJsb2cxIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDA5OjAwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJncmFudGVkIiwgInVzZXIiOiAidXNlcjEiIH0sCiAgICAgIHsgImxvZ19pZCI6ICJsb2cyIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDEwOjMwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJkZW5pZWQiLCAidXNlciI6ICJ1c2VyMiIgfSwKICAgICAgeyAibG9nX2lkIjogImxvZzMiLCAidGltZXN0YW1wIjogIjIwMjQtMTEtMDFUMTE6MDA6MDBaIiwgInpvbmVfaWQiOiAiem9uZUIiLCAiYWNjZXNzIjogImdyYW50ZWQiLCAidXNlciI6ICJ1c2VyMyIgfQogICAgXQogIH0KfQ=="},"policy":{"zoneB":"cGFja2FnZSB6b25lQgogCmltcG9ydCByZWdvLnYxCiAKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQogCmFsbG93IGlmIHsKICAgIGhhc196b25lX2FjY2VzcwogICAgYWN0aW9uX2lzX2xvZ192aWV3Cn0KIAphY3Rpb25faXNfbG9nX3ZpZXcgaWYgewogICAgInZpZXciIGluIGlucHV0LmFjdGlvbnMKfQogCmhhc196b25lX2FjY2VzcyBjb250YWlucyBhY2Nlc3NfZGF0YSBpZiB7CiAgICBzb21lIHpvbmVfZGF0YSBpbiBkYXRhLm5vZGUuem9uZUIuem9uZS56b25lX2FjY2Vzc19sb2dzCiAgICB6b25lX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KICAgIHpvbmVfZGF0YS50aW1lc3RhbXAgPCBpbnB1dC50aW1lX3BlcmlvZC50bwogICAgem9uZV9kYXRhLnpvbmVfaWQgPT0gaW5wdXQuem9uZV9pZAogICAgYWNjZXNzX2RhdGEgOj0ge2RhdGF0eXBlOiB6b25lX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"zoneB","version":"1.0.6","metadata":{"policy-id":"zoneB","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"c212e2a9-33e3-4aa7-bd48-d52ca83f261d","timestampMs":1750325101048,"name":"opa-6446e8da-32c3-48c2-9df7-d65664d9050e","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:25:01.1274+00:00] messageType: PDP_UPDATE 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:25:01.1286+00:00] PDP_UPDATE Message received: 
{"source":"pap-6f1d1af8-1c2a-4d8a-8de4-bf40559a9b3f","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.zoneB":"ewogICJ6b25lIjogewogICAgInpvbmVfYWNjZXNzX2xvZ3MiOiBbCiAgICAgIHsgImxvZ19pZCI6ICJsb2cxIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDA5OjAwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJncmFudGVkIiwgInVzZXIiOiAidXNlcjEiIH0sCiAgICAgIHsgImxvZ19pZCI6ICJsb2cyIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDEwOjMwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJkZW5pZWQiLCAidXNlciI6ICJ1c2VyMiIgfSwKICAgICAgeyAibG9nX2lkIjogImxvZzMiLCAidGltZXN0YW1wIjogIjIwMjQtMTEtMDFUMTE6MDA6MDBaIiwgInpvbmVfaWQiOiAiem9uZUIiLCAiYWNjZXNzIjogImdyYW50ZWQiLCAidXNlciI6ICJ1c2VyMyIgfQogICAgXQogIH0KfQ=="},"policy":{"zoneB":"cGFja2FnZSB6b25lQgogCmltcG9ydCByZWdvLnYxCiAKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQogCmFsbG93IGlmIHsKICAgIGhhc196b25lX2FjY2VzcwogICAgYWN0aW9uX2lzX2xvZ192aWV3Cn0KIAphY3Rpb25faXNfbG9nX3ZpZXcgaWYgewogICAgInZpZXciIGluIGlucHV0LmFjdGlvbnMKfQogCmhhc196b25lX2FjY2VzcyBjb250YWlucyBhY2Nlc3NfZGF0YSBpZiB7CiAgICBzb21lIHpvbmVfZGF0YSBpbiBkYXRhLm5vZGUuem9uZUIuem9uZS56b25lX2FjY2Vzc19sb2dzCiAgICB6b25lX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KICAgIHpvbmVfZGF0YS50aW1lc3RhbXAgPCBpbnB1dC50aW1lX3BlcmlvZC50bwogICAgem9uZV9kYXRhLnpvbmVfaWQgPT0gaW5wdXQuem9uZV9pZAogICAgYWNjZXNzX2RhdGEgOj0ge2RhdGF0eXBlOiB6b25lX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"zoneB","version":"1.0.6","metadata":{"policy-id":"zoneB","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"c212e2a9-33e3-4aa7-bd48-d52ca83f261d","timestampMs":1750325101048,"name":"opa-6446e8da-32c3-48c2-9df7-d65664d9050e","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:25:01.1286+00:00] Check if Policy is Already Deployed: { 09:27:20 policy-opa-pdp | "deployed_policies_dict": [ 09:27:20 policy-opa-pdp | { 09:27:20 policy-opa-pdp | "data": [ 09:27:20 policy-opa-pdp | "node.slice.capacity.check" 09:27:20 policy-opa-pdp | ], 09:27:20 policy-opa-pdp | "policy": [ 09:27:20 policy-opa-pdp | "slice.capacity.check" 09:27:20 policy-opa-pdp | ], 09:27:20 policy-opa-pdp | "policy-id": "slice.capacity.check", 09:27:20 policy-opa-pdp | "policy-version": "1.0.0" 09:27:20 policy-opa-pdp | } 09:27:20 policy-opa-pdp | ] 09:27:20 policy-opa-pdp | } 09:27:20 policy-opa-pdp | INFO[2025-06-19T09:25:01.1286+00:00] Policy is new and should be deployed: zoneB 1.0.6 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:25:01.1287+00:00] Policy Is Allowed: zoneB 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:25:01.1287+00:00] Validating properties data for policy: zoneB 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:25:01.1287+00:00] Validating properties policy for policy: zoneB 09:27:20 policy-opa-pdp | INFO[2025-06-19T09:25:01.1287+00:00] Validation successful for policy: zoneB 09:27:20 policy-opa-pdp | INFO[2025-06-19T09:25:01.1289+00:00] Directory created: /opt/policies/zoneB 09:27:20 policy-opa-pdp | INFO[2025-06-19T09:25:01.1290+00:00] Policy file saved: /opt/policies/zoneB/policy.rego 09:27:20 policy-opa-pdp | INFO[2025-06-19T09:25:01.1291+00:00] Directory created: /opt/data/node/zoneB 09:27:20 policy-opa-pdp | INFO[2025-06-19T09:25:01.1292+00:00] Data file saved: /opt/data/node/zoneB/data.json 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:25:01.1292+00:00] Before calling combinedoutput 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:25:01.1525+00:00] Bundle Built Sucessfully.... 
09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:25:01.1558+00:00] storage not found creating : /node/zoneB 09:27:20 policy-opa-pdp | INFO[2025-06-19T09:25:01.1559+00:00] PoliciesDeployed Map: { 09:27:20 policy-opa-pdp | "deployed_policies_dict": [ 09:27:20 policy-opa-pdp | { 09:27:20 policy-opa-pdp | "data": [ 09:27:20 policy-opa-pdp | "node.slice.capacity.check" 09:27:20 policy-opa-pdp | ], 09:27:20 policy-opa-pdp | "policy": [ 09:27:20 policy-opa-pdp | "slice.capacity.check" 09:27:20 policy-opa-pdp | ], 09:27:20 policy-opa-pdp | "policy-id": "slice.capacity.check", 09:27:20 policy-opa-pdp | "policy-version": "1.0.0" 09:27:20 policy-opa-pdp | }, 09:27:20 policy-opa-pdp | { 09:27:20 policy-opa-pdp | "data": [ 09:27:20 policy-opa-pdp | "node.zoneB" 09:27:20 policy-opa-pdp | ], 09:27:20 policy-opa-pdp | "policy": [ 09:27:20 policy-opa-pdp | "zoneB" 09:27:20 policy-opa-pdp | ], 09:27:20 policy-opa-pdp | "policy-id": "zoneB", 09:27:20 policy-opa-pdp | "policy-version": "1.0.6" 09:27:20 policy-opa-pdp | } 09:27:20 policy-opa-pdp | ] 09:27:20 policy-opa-pdp | } 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:25:01.1559+00:00] Loaded Policy: zoneB 09:27:20 policy-opa-pdp | 2025/06/19 09:25:01 KafkaProducer or producer produce message 09:27:20 policy-opa-pdp | INFO[2025-06-19T09:25:01.1559+00:00] Processed policies_to_be_deployed successfully 09:27:20 policy-opa-pdp | INFO[2025-06-19T09:25:01.1559+00:00] Sending PDP Status With Update Response 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:25:01.1560+00:00] [OUT|KAFKA|policy-pdp-pap] 09:27:20 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"c212e2a9-33e3-4aa7-bd48-d52ca83f261d","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"zoneB\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"zoneB","version":"1.0.6"}],"name":"opa-6446e8da-32c3-48c2-9df7-d65664d9050e","requestId":"13b88216-5f76-48aa-8143-852516149600","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750325101155","deploymentInstanceInfo":""} 09:27:20 policy-opa-pdp | INFO[2025-06-19T09:25:01.1560+00:00] PDP_STATUS Message Sent Successfully 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:25:01.1560+00:00] 0 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:25:01.1639+00:00] [IN|KAFKA|policy-pdp-pap] 09:27:20 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"c212e2a9-33e3-4aa7-bd48-d52ca83f261d","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"zoneB\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"zoneB","version":"1.0.6"}],"name":"opa-6446e8da-32c3-48c2-9df7-d65664d9050e","requestId":"13b88216-5f76-48aa-8143-852516149600","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750325101155","deploymentInstanceInfo":""} 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:25:01.1640+00:00] messageType: PDP_STATUS 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:25:01.1640+00:00] discarding event of type PDP_STATUS 09:27:20 policy-opa-pdp | INFO[2025-06-19T09:25:25.3066+00:00] PDP received a request to get data through API 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:25:25.3067+00:00] datapath to get Data : /node/zoneB/zone 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:25:25.3068+00:00] 
Json Data at /node/zoneB/zone: {"zone_access_logs":[{"access":"granted","log_id":"log1","timestamp":"2024-11-01T09:00:00Z","user":"user1","zone_id":"zoneA"},{"access":"denied","log_id":"log2","timestamp":"2024-11-01T10:30:00Z","user":"user2","zone_id":"zoneA"},{"access":"granted","log_id":"log3","timestamp":"2024-11-01T11:00:00Z","user":"user3","zone_id":"zoneB"}]} 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:25:25.3176+00:00] PDP received a decision request. 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:25:25.3176+00:00] Headers processed for requestId: Unknown 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:25:25.3182+00:00] Validation successful for request fields 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:25:25.3183+00:00] SDK making a decision 09:27:20 policy-opa-pdp | {"decision_id":"13fb42e7-b6bf-42b9-b5df-10ef0e785b74","input":{"actions":["view"],"datatypes":["access","user"],"log_id":"log1","time_period":{"from":"2024-11-01T09:00:00Z","to":"2024-11-01T10:00:00Z"},"zone_id":"zoneA"},"labels":{"id":"127b058e-62a4-4e4d-8cd3-39c05eb84176","version":"1.1.0"},"level":"info","metrics":{"timer_rego_external_resolve_ns":1120,"timer_rego_query_compile_ns":234414,"timer_rego_query_eval_ns":687782,"timer_rego_query_parse_ns":130522,"timer_sdk_decision_eval_ns":1275012},"msg":"Decision Log","nd_builtin_cache":null,"path":"zoneB","result":{"action_is_log_view":true,"allow":true,"has_zone_access":[{"access":"granted","user":"user1"}]},"time":"2025-06-19T09:25:25Z","timestamp":"2025-06-19T09:25:25.318402027Z","type":"openpolicyagent.org/decision_logs"} 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:25:25.3205+00:00] RAW opa Decision output: 09:27:20 policy-opa-pdp | { 09:27:20 policy-opa-pdp | "ID": "13fb42e7-b6bf-42b9-b5df-10ef0e785b74", 09:27:20 policy-opa-pdp | "Result": { 09:27:20 policy-opa-pdp | "action_is_log_view": true, 09:27:20 policy-opa-pdp | "allow": true, 09:27:20 policy-opa-pdp | "has_zone_access": [ 09:27:20 policy-opa-pdp | { 09:27:20 policy-opa-pdp | "access": "granted", 09:27:20 policy-opa-pdp | "user": "user1" 09:27:20 policy-opa-pdp | } 09:27:20 policy-opa-pdp | ] 09:27:20 policy-opa-pdp | }, 09:27:20 policy-opa-pdp | "Provenance": { 09:27:20 policy-opa-pdp | "version": "1.1.0", 09:27:20 policy-opa-pdp | "build_commit": "", 09:27:20 policy-opa-pdp | "build_timestamp": "", 09:27:20 policy-opa-pdp | "build_hostname": "" 09:27:20 policy-opa-pdp | } 09:27:20 policy-opa-pdp | } 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:25:25.3349+00:00] PDP received a decision request. 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:25:25.3350+00:00] Headers processed for requestId: Unknown 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:25:25.3353+00:00] Validation successful for request fields 09:27:20 policy-opa-pdp | WARN[2025-06-19T09:25:25.3353+00:00] Policy Name zoeB does not exist 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:25:25.3426+00:00] PDP received a decision request. 
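The decision log above shows the zoneB policy returning allow=true with has_zone_access=[{"access":"granted","user":"user1"}] for the given input, evaluated against the zone_access_logs data stored under /node/zoneB/zone. The Rego itself appears only base64-encoded in the PDP_UPDATE, so the following is a Python restatement of the behaviour visible in the log, not the deployed source:

```python
def has_zone_access(zone_access_logs, req):
    """Entries in the requested time window and zone, projected to the requested datatypes."""
    return [
        {k: entry[k] for k in req["datatypes"]}
        for entry in zone_access_logs
        if req["time_period"]["from"] <= entry["timestamp"] < req["time_period"]["to"]
        and entry["zone_id"] == req["zone_id"]
    ]

logs = [
    {"access": "granted", "log_id": "log1", "timestamp": "2024-11-01T09:00:00Z", "user": "user1", "zone_id": "zoneA"},
    {"access": "denied",  "log_id": "log2", "timestamp": "2024-11-01T10:30:00Z", "user": "user2", "zone_id": "zoneA"},
    {"access": "granted", "log_id": "log3", "timestamp": "2024-11-01T11:00:00Z", "user": "user3", "zone_id": "zoneB"},
]
request = {"actions": ["view"], "datatypes": ["access", "user"], "zone_id": "zoneA",
           "time_period": {"from": "2024-11-01T09:00:00Z", "to": "2024-11-01T10:00:00Z"}}
allow = "view" in request["actions"] and bool(has_zone_access(logs, request))
# -> allow == True, has_zone_access == [{"access": "granted", "user": "user1"}], as in the decision log
```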
09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:25:25.3427+00:00] Headers processed for requestId: Unknown 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:25:25.3429+00:00] Validation successful for request fields 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:25:25.3430+00:00] SDK making a decision 09:27:20 policy-opa-pdp | {"decision_id":"81b67e37-6d48-4024-b0ec-d57c424346f1","input":{"actions":["view"],"datatypes":["access","user"],"log_id":"log1","time_period":{"from":"2024-11-01T09:00:00Z","to":"2024-11-01T10:00:00Z"},"zone_id":"zoneA"},"labels":{"id":"127b058e-62a4-4e4d-8cd3-39c05eb84176","version":"1.1.0"},"level":"info","metrics":{"timer_rego_external_resolve_ns":560,"timer_rego_query_eval_ns":296665,"timer_sdk_decision_eval_ns":377327},"msg":"Decision Log","nd_builtin_cache":null,"path":"zoneB","result":{"action_is_log_view":true,"allow":true,"has_zone_access":[{"access":"granted","user":"user1"}]},"time":"2025-06-19T09:25:25Z","timestamp":"2025-06-19T09:25:25.343073025Z","type":"openpolicyagent.org/decision_logs"} 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:25:25.3436+00:00] RAW opa Decision output: 09:27:20 policy-opa-pdp | { 09:27:20 policy-opa-pdp | "ID": "81b67e37-6d48-4024-b0ec-d57c424346f1", 09:27:20 policy-opa-pdp | "Result": { 09:27:20 policy-opa-pdp | "action_is_log_view": true, 09:27:20 policy-opa-pdp | "allow": true, 09:27:20 policy-opa-pdp | "has_zone_access": [ 09:27:20 policy-opa-pdp | { 09:27:20 policy-opa-pdp | "access": "granted", 09:27:20 policy-opa-pdp | "user": "user1" 09:27:20 policy-opa-pdp | } 09:27:20 policy-opa-pdp | ] 09:27:20 policy-opa-pdp | }, 09:27:20 policy-opa-pdp | "Provenance": { 09:27:20 policy-opa-pdp | "version": "1.1.0", 09:27:20 policy-opa-pdp | "build_commit": "", 09:27:20 policy-opa-pdp | "build_timestamp": "", 09:27:20 policy-opa-pdp | "build_hostname": "" 09:27:20 policy-opa-pdp | } 09:27:20 policy-opa-pdp | } 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:25:25.6886+00:00] [IN|KAFKA|policy-pdp-pap] 09:27:20 policy-opa-pdp | {"source":"pap-6f1d1af8-1c2a-4d8a-8de4-bf40559a9b3f","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"zoneB","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"03bad8c5-d5a9-4425-bf0e-a2ac4f8f48d2","timestampMs":1750325125656,"name":"opa-6446e8da-32c3-48c2-9df7-d65664d9050e","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:25:25.6887+00:00] messageType: PDP_UPDATE 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:25:25.6889+00:00] PDP_UPDATE Message received: {"source":"pap-6f1d1af8-1c2a-4d8a-8de4-bf40559a9b3f","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"zoneB","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"03bad8c5-d5a9-4425-bf0e-a2ac4f8f48d2","timestampMs":1750325125656,"name":"opa-6446e8da-32c3-48c2-9df7-d65664d9050e","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 09:27:20 policy-opa-pdp | INFO[2025-06-19T09:25:25.6889+00:00] Found Policies to be undeployed 09:27:20 policy-opa-pdp | INFO[2025-06-19T09:25:25.6889+00:00] Extracted Policy Name: zoneB, Version: 1.0.6 for undeployment 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:25:25.6890+00:00] Deleting Policy from OPA : /zoneB 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:25:25.6923+00:00] Removing policy directory: /opt/policies/zoneB 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:25:25.6926+00:00] Deleting data from OPA : /node/zoneB 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:25:25.6926+00:00] Analyzing dataPath: /node/zoneB 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:25:25.6926+00:00] Path 
segments: [ node zoneB] 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:25:25.6926+00:00] Path doesn't have any parent-child hierarchy;so returning the original path: /node/zoneB 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:25:25.6926+00:00] Removing data directory: /opt/data/node/zoneB 09:27:20 policy-opa-pdp | INFO[2025-06-19T09:25:25.6930+00:00] PoliciesDeployed Map: { 09:27:20 policy-opa-pdp | "deployed_policies_dict": [ 09:27:20 policy-opa-pdp | { 09:27:20 policy-opa-pdp | "data": [ 09:27:20 policy-opa-pdp | "node.slice.capacity.check" 09:27:20 policy-opa-pdp | ], 09:27:20 policy-opa-pdp | "policy": [ 09:27:20 policy-opa-pdp | "slice.capacity.check" 09:27:20 policy-opa-pdp | ], 09:27:20 policy-opa-pdp | "policy-id": "slice.capacity.check", 09:27:20 policy-opa-pdp | "policy-version": "1.0.0" 09:27:20 policy-opa-pdp | } 09:27:20 policy-opa-pdp | ] 09:27:20 policy-opa-pdp | } 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:25:25.6930+00:00] Policies Map After Undeployment : { 09:27:20 policy-opa-pdp | "deployed_policies_dict": [ 09:27:20 policy-opa-pdp | { 09:27:20 policy-opa-pdp | "data": [ 09:27:20 policy-opa-pdp | "node.slice.capacity.check" 09:27:20 policy-opa-pdp | ], 09:27:20 policy-opa-pdp | "policy": [ 09:27:20 policy-opa-pdp | "slice.capacity.check" 09:27:20 policy-opa-pdp | ], 09:27:20 policy-opa-pdp | "policy-id": "slice.capacity.check", 09:27:20 policy-opa-pdp | "policy-version": "1.0.0" 09:27:20 policy-opa-pdp | } 09:27:20 policy-opa-pdp | ] 09:27:20 policy-opa-pdp | } 09:27:20 policy-opa-pdp | 2025/06/19 09:25:25 KafkaProducer or producer produce message 09:27:20 policy-opa-pdp | INFO[2025-06-19T09:25:25.6930+00:00] Processed policies_to_be_undeployed successfully 09:27:20 policy-opa-pdp | INFO[2025-06-19T09:25:25.6931+00:00] Sending PDP Status With Update Response 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:25:25.6932+00:00] [OUT|KAFKA|policy-pdp-pap] 09:27:20 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"03bad8c5-d5a9-4425-bf0e-a2ac4f8f48d2","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"zoneB\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-6446e8da-32c3-48c2-9df7-d65664d9050e","requestId":"523ad858-af5e-4b7e-876b-08d85af2d0f1","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750325125693","deploymentInstanceInfo":""} 09:27:20 policy-opa-pdp | INFO[2025-06-19T09:25:25.6932+00:00] PDP_STATUS Message Sent Successfully 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:25:25.6932+00:00] 0 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:25:25.7009+00:00] [IN|KAFKA|policy-pdp-pap] 09:27:20 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"03bad8c5-d5a9-4425-bf0e-a2ac4f8f48d2","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"zoneB\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-6446e8da-32c3-48c2-9df7-d65664d9050e","requestId":"523ad858-af5e-4b7e-876b-08d85af2d0f1","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750325125693","deploymentInstanceInfo":""} 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:25:25.7010+00:00] messageType: PDP_STATUS 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:25:25.7010+00:00] discarding event of type PDP_STATUS 09:27:20 
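The undeploy branch above deletes the zoneB policy and its data from OPA, removes the /opt/policies/zoneB and /opt/data/node/zoneB directories, and drops the entry from the deployed-policies map, leaving only slice.capacity.check. A minimal sketch of that last bookkeeping step, assuming the same map shape as printed in the log:

```python
def drop_undeployed(deployed_policies, policy_id, version):
    """Remove the undeployed policy from the deployed-policies map."""
    return [
        p for p in deployed_policies
        if not (p.get("policy-id") == policy_id and p.get("policy-version") == version)
    ]

deployed = [
    {"policy-id": "slice.capacity.check", "policy-version": "1.0.0"},
    {"policy-id": "zoneB", "policy-version": "1.0.6"},
]
deployed = drop_undeployed(deployed, "zoneB", "1.0.6")
# only slice.capacity.check remains, matching the "Policies Map After Undeployment" above
```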
policy-opa-pdp | DEBU[2025-06-19T09:25:26.9422+00:00] [IN|KAFKA|policy-pdp-pap] 09:27:20 policy-opa-pdp | {"source":"pap-6f1d1af8-1c2a-4d8a-8de4-bf40559a9b3f","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.vehicle":"ewogICJ2ZWhpY2xlcyI6IFsKICAgIHsgInZlaGljbGVfaWQiOiAidjEiLCAib3duZXIiOiAidXNlcjEiLCAidHlwZSI6ICJjYXIiLCAic3RhdHVzIjogImF2YWlsYWJsZSIgfSwKICAgIHsgInZlaGljbGVfaWQiOiAidjIiLCAib3duZXIiOiAidXNlcjIiLCAidHlwZSI6ICJiaWtlIiwgInN0YXR1cyI6ICJpbiB1c2UiIH0KICBdCn0K"},"policy":{"vehicle":"cGFja2FnZSB2ZWhpY2xlCgppbXBvcnQgIHJlZ28udjEKCmRlZmF1bHQgYWxsb3cgOj0gZmFsc2UKCmFsbG93IGlmIHsKICAgIHVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzCiAgICBhY3Rpb25faXNfZ3JhbnRlZAp9CgphY3Rpb25faXNfZ3JhbnRlZCBpZiB7CiAgICAidXNlIiBpbiBpbnB1dC5hY3Rpb25zCn0KCnVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzIGNvbnRhaW5zIHZlaGljbGVfZGF0YSBpZiB7CiAgICBzb21lIHZlaGljbGUgaW4gZGF0YS5ub2RlLnZlaGljbGUudmVoaWNsZXMKICAgIHZlaGljbGUudmVoaWNsZV9pZCA9PSBpbnB1dC52ZWhpY2xlX2lkCiAgICB2ZWhpY2xlLm93bmVyID09IGlucHV0LnVzZXIKICAgIHZlaGljbGVfZGF0YSA6PSB7aW5mbzogdmVoaWNsZVtpbmZvXSB8IGluZm8gaW4gaW5wdXQuYXR0cmlidXRlc30KfQo="}},"name":"vehicle","version":"1.0.6","metadata":{"policy-id":"vehicle","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"f035de1b-a3a2-4fb7-882e-721e5e1c050d","timestampMs":1750325126920,"name":"opa-6446e8da-32c3-48c2-9df7-d65664d9050e","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:25:26.9423+00:00] messageType: PDP_UPDATE 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:25:26.9425+00:00] PDP_UPDATE Message received: {"source":"pap-6f1d1af8-1c2a-4d8a-8de4-bf40559a9b3f","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.vehicle":"ewogICJ2ZWhpY2xlcyI6IFsKICAgIHsgInZlaGljbGVfaWQiOiAidjEiLCAib3duZXIiOiAidXNlcjEiLCAidHlwZSI6ICJjYXIiLCAic3RhdHVzIjogImF2YWlsYWJsZSIgfSwKICAgIHsgInZlaGljbGVfaWQiOiAidjIiLCAib3duZXIiOiAidXNlcjIiLCAidHlwZSI6ICJiaWtlIiwgInN0YXR1cyI6ICJpbiB1c2UiIH0KICBdCn0K"},"policy":{"vehicle":"cGFja2FnZSB2ZWhpY2xlCgppbXBvcnQgIHJlZ28udjEKCmRlZmF1bHQgYWxsb3cgOj0gZmFsc2UKCmFsbG93IGlmIHsKICAgIHVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzCiAgICBhY3Rpb25faXNfZ3JhbnRlZAp9CgphY3Rpb25faXNfZ3JhbnRlZCBpZiB7CiAgICAidXNlIiBpbiBpbnB1dC5hY3Rpb25zCn0KCnVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzIGNvbnRhaW5zIHZlaGljbGVfZGF0YSBpZiB7CiAgICBzb21lIHZlaGljbGUgaW4gZGF0YS5ub2RlLnZlaGljbGUudmVoaWNsZXMKICAgIHZlaGljbGUudmVoaWNsZV9pZCA9PSBpbnB1dC52ZWhpY2xlX2lkCiAgICB2ZWhpY2xlLm93bmVyID09IGlucHV0LnVzZXIKICAgIHZlaGljbGVfZGF0YSA6PSB7aW5mbzogdmVoaWNsZVtpbmZvXSB8IGluZm8gaW4gaW5wdXQuYXR0cmlidXRlc30KfQo="}},"name":"vehicle","version":"1.0.6","metadata":{"policy-id":"vehicle","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"f035de1b-a3a2-4fb7-882e-721e5e1c050d","timestampMs":1750325126920,"name":"opa-6446e8da-32c3-48c2-9df7-d65664d9050e","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:25:26.9425+00:00] Check if Policy is Already Deployed: { 09:27:20 policy-opa-pdp | "deployed_policies_dict": [ 09:27:20 policy-opa-pdp | { 09:27:20 policy-opa-pdp | "data": [ 09:27:20 policy-opa-pdp | "node.slice.capacity.check" 09:27:20 policy-opa-pdp | ], 09:27:20 policy-opa-pdp | "policy": [ 09:27:20 policy-opa-pdp | "slice.capacity.check" 09:27:20 policy-opa-pdp | ], 09:27:20 policy-opa-pdp | "policy-id": "slice.capacity.check", 09:27:20 policy-opa-pdp | "policy-version": "1.0.0" 09:27:20 policy-opa-pdp | } 09:27:20 policy-opa-pdp | ] 
09:27:20 policy-opa-pdp | } 09:27:20 policy-opa-pdp | INFO[2025-06-19T09:25:26.9425+00:00] Policy is new and should be deployed: vehicle 1.0.6 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:25:26.9426+00:00] Policy Is Allowed: vehicle 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:25:26.9426+00:00] Validating properties data for policy: vehicle 09:27:20 policy-opa-pdp | DEBU[2025-06-19T09:25:26.9426+00:00] Validating properties policy for policy: vehicle 09:27:20 policy-opa-pdp | INFO[2025-06-19T09:25:26.9426+00:00] Validation successful for policy: vehicle 09:27:21 policy-opa-pdp | INFO[2025-06-19T09:25:26.9429+00:00] Directory created: /opt/policies/vehicle 09:27:21 policy-opa-pdp | INFO[2025-06-19T09:25:26.9429+00:00] Policy file saved: /opt/policies/vehicle/policy.rego 09:27:21 policy-opa-pdp | INFO[2025-06-19T09:25:26.9430+00:00] Directory created: /opt/data/node/vehicle 09:27:21 policy-opa-pdp | INFO[2025-06-19T09:25:26.9431+00:00] Data file saved: /opt/data/node/vehicle/data.json 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:25:26.9431+00:00] Before calling combinedoutput 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:25:26.9617+00:00] Bundle Built Sucessfully.... 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:25:26.9655+00:00] storage not found creating : /node/vehicle 09:27:21 policy-opa-pdp | INFO[2025-06-19T09:25:26.9656+00:00] PoliciesDeployed Map: { 09:27:21 policy-opa-pdp | "deployed_policies_dict": [ 09:27:21 policy-opa-pdp | { 09:27:21 policy-opa-pdp | "data": [ 09:27:21 policy-opa-pdp | "node.slice.capacity.check" 09:27:21 policy-opa-pdp | ], 09:27:21 policy-opa-pdp | "policy": [ 09:27:21 policy-opa-pdp | "slice.capacity.check" 09:27:21 policy-opa-pdp | ], 09:27:21 policy-opa-pdp | "policy-id": "slice.capacity.check", 09:27:21 policy-opa-pdp | "policy-version": "1.0.0" 09:27:21 policy-opa-pdp | }, 09:27:21 policy-opa-pdp | { 09:27:21 policy-opa-pdp | "data": [ 09:27:21 policy-opa-pdp | "node.vehicle" 09:27:21 policy-opa-pdp | ], 09:27:21 policy-opa-pdp | "policy": [ 09:27:21 policy-opa-pdp | "vehicle" 09:27:21 policy-opa-pdp | ], 09:27:21 policy-opa-pdp | "policy-id": "vehicle", 09:27:21 policy-opa-pdp | "policy-version": "1.0.6" 09:27:21 policy-opa-pdp | } 09:27:21 policy-opa-pdp | ] 09:27:21 policy-opa-pdp | } 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:25:26.9656+00:00] Loaded Policy: vehicle 09:27:21 policy-opa-pdp | INFO[2025-06-19T09:25:26.9657+00:00] Processed policies_to_be_deployed successfully 09:27:21 policy-opa-pdp | INFO[2025-06-19T09:25:26.9658+00:00] Sending PDP Status With Update Response 09:27:21 policy-opa-pdp | 2025/06/19 09:25:26 KafkaProducer or producer produce message 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:25:26.9659+00:00] [OUT|KAFKA|policy-pdp-pap] 09:27:21 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"f035de1b-a3a2-4fb7-882e-721e5e1c050d","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"vehicle\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"vehicle","version":"1.0.6"}],"name":"opa-6446e8da-32c3-48c2-9df7-d65664d9050e","requestId":"096b1807-1a23-4459-87bf-e6cffafee28e","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750325126965","deploymentInstanceInfo":""} 09:27:21 policy-opa-pdp | INFO[2025-06-19T09:25:26.9659+00:00] PDP_STATUS Message Sent Successfully 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:25:26.9659+00:00] 0 
09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:25:26.9748+00:00] [IN|KAFKA|policy-pdp-pap] 09:27:21 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"f035de1b-a3a2-4fb7-882e-721e5e1c050d","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"vehicle\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"vehicle","version":"1.0.6"}],"name":"opa-6446e8da-32c3-48c2-9df7-d65664d9050e","requestId":"096b1807-1a23-4459-87bf-e6cffafee28e","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750325126965","deploymentInstanceInfo":""} 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:25:26.9749+00:00] messageType: PDP_STATUS 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:25:26.9749+00:00] discarding event of type PDP_STATUS 09:27:21 policy-opa-pdp | 2025/06/19 09:25:49 KafkaProducer or producer produce message 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:25:49.7782+00:00] [OUT|KAFKA|policy-pdp-pap] 09:27:21 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp heartbeat","response":null,"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"vehicle","version":"1.0.6"}],"name":"opa-6446e8da-32c3-48c2-9df7-d65664d9050e","requestId":"cd490f88-9ebe-44e4-8a7d-e586e40c3d22","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750325149778","deploymentInstanceInfo":""} 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:25:49.7783+00:00] Sending Heartbeat ... 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:25:49.7868+00:00] [IN|KAFKA|policy-pdp-pap] 09:27:21 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp heartbeat","response":null,"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"vehicle","version":"1.0.6"}],"name":"opa-6446e8da-32c3-48c2-9df7-d65664d9050e","requestId":"cd490f88-9ebe-44e4-8a7d-e586e40c3d22","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750325149778","deploymentInstanceInfo":""} 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:25:49.7869+00:00] messageType: PDP_STATUS 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:25:49.7870+00:00] discarding event of type PDP_STATUS 09:27:21 policy-opa-pdp | INFO[2025-06-19T09:25:51.0201+00:00] PDP received a request to get data through API 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:25:51.0202+00:00] datapath to get Data : /node/vehicle 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:25:51.0203+00:00] Json Data at /node/vehicle: {"vehicles":[{"owner":"user1","status":"available","type":"car","vehicle_id":"v1"},{"owner":"user2","status":"in use","type":"bike","vehicle_id":"v2"}]} 09:27:21 policy-opa-pdp | INFO[2025-06-19T09:25:51.0313+00:00] PDP received a request to update data through API 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:25:51.0316+00:00] All fields are valid! 
09:27:21 policy-opa-pdp | INFO[2025-06-19T09:25:51.0316+00:00] data : [map[op:add path:/round value:trail]] 09:27:21 policy-opa-pdp | INFO[2025-06-19T09:25:51.0316+00:00] policy name : vehicle 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:25:51.0317+00:00] deployedPolicies [map[data:[node.slice.capacity.check] policy:[slice.capacity.check] policy-id:slice.capacity.check policy-version:1.0.0] map[data:[node.vehicle] policy:[vehicle] policy-id:vehicle policy-version:1.0.6]] 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:25:51.0317+00:00] dirParts : [ node vehicle] 09:27:21 policy-opa-pdp | INFO[2025-06-19T09:25:51.0318+00:00] Matched policy: &{Data:[node.vehicle] Policy:[vehicle] PolicyID:vehicle PolicyVersion:1.0.6} 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:25:51.0318+00:00] root: /node/vehicle 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:25:51.0318+00:00] path : round 09:27:21 policy-opa-pdp | INFO[2025-06-19T09:25:51.0318+00:00] calling ParsePatchPathEscaped to check the path 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:25:51.0318+00:00] No path conflicts detected 09:27:21 policy-opa-pdp | INFO[2025-06-19T09:25:51.0319+00:00] Updated the data in the corresponding path successfully 09:27:21 policy-opa-pdp | INFO[2025-06-19T09:25:51.0392+00:00] PDP received a request to get data through API 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:25:51.0393+00:00] datapath to get Data : /node/vehicle 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:25:51.0394+00:00] Json Data at /node/vehicle: {"round":"trail","vehicles":[{"owner":"user1","status":"available","type":"car","vehicle_id":"v1"},{"owner":"user2","status":"in use","type":"bike","vehicle_id":"v2"}]} 09:27:21 policy-opa-pdp | INFO[2025-06-19T09:25:51.0492+00:00] PDP received a request to update data through API 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:25:51.0497+00:00] All fields are valid! 
09:27:21 policy-opa-pdp | INFO[2025-06-19T09:25:51.0497+00:00] data : [map[op:replace path:/round value:%!s(float64=578)]] 09:27:21 policy-opa-pdp | INFO[2025-06-19T09:25:51.0497+00:00] policy name : vehicle 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:25:51.0499+00:00] deployedPolicies [map[data:[node.slice.capacity.check] policy:[slice.capacity.check] policy-id:slice.capacity.check policy-version:1.0.0] map[data:[node.vehicle] policy:[vehicle] policy-id:vehicle policy-version:1.0.6]] 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:25:51.0499+00:00] dirParts : [ node vehicle] 09:27:21 policy-opa-pdp | INFO[2025-06-19T09:25:51.0499+00:00] Matched policy: &{Data:[node.vehicle] Policy:[vehicle] PolicyID:vehicle PolicyVersion:1.0.6} 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:25:51.0499+00:00] root: /node/vehicle 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:25:51.0499+00:00] path : round 09:27:21 policy-opa-pdp | INFO[2025-06-19T09:25:51.0500+00:00] calling ParsePatchPathEscaped to check the path 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:25:51.0501+00:00] No path conflicts detected 09:27:21 policy-opa-pdp | INFO[2025-06-19T09:25:51.0501+00:00] Updated the data in the corresponding path successfully 09:27:21 policy-opa-pdp | INFO[2025-06-19T09:25:51.0577+00:00] PDP received a request to get data through API 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:25:51.0578+00:00] datapath to get Data : /node/vehicle 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:25:51.0578+00:00] Json Data at /node/vehicle: {"round":578,"vehicles":[{"owner":"user1","status":"available","type":"car","vehicle_id":"v1"},{"owner":"user2","status":"in use","type":"bike","vehicle_id":"v2"}]} 09:27:21 policy-opa-pdp | INFO[2025-06-19T09:25:51.0688+00:00] PDP received a request to update data through API 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:25:51.0697+00:00] All fields are valid! 
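
The data-update requests in this test sequence (the add and replace above, and the remove logged next) are standard RFC 6902 JSON Patch operations applied under the policy's data root /node/vehicle. A minimal sketch, assuming the third-party jsonpatch package rather than the PDP's own Go implementation, reproduces the three document states returned by the interleaved GET calls:

import jsonpatch  # pip install jsonpatch; illustration only, the OPA PDP applies these patches in Go

# Data document as returned by GET /node/vehicle before the updates.
doc = {
    "vehicles": [
        {"vehicle_id": "v1", "owner": "user1", "type": "car", "status": "available"},
        {"vehicle_id": "v2", "owner": "user2", "type": "bike", "status": "in use"},
    ]
}

# op:add path:/round value:trail   -> {"round": "trail", "vehicles": [...]}
doc = jsonpatch.apply_patch(doc, [{"op": "add", "path": "/round", "value": "trail"}])
# op:replace path:/round value:578 -> {"round": 578, "vehicles": [...]}
doc = jsonpatch.apply_patch(doc, [{"op": "replace", "path": "/round", "value": 578}])
# op:remove path:/round            -> back to the original document
doc = jsonpatch.apply_patch(doc, [{"op": "remove", "path": "/round"}])
print(doc)
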
09:27:21 policy-opa-pdp | INFO[2025-06-19T09:25:51.0699+00:00] data : [map[op:remove path:/round]] 09:27:21 policy-opa-pdp | INFO[2025-06-19T09:25:51.0701+00:00] policy name : vehicle 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:25:51.0703+00:00] deployedPolicies [map[data:[node.slice.capacity.check] policy:[slice.capacity.check] policy-id:slice.capacity.check policy-version:1.0.0] map[data:[node.vehicle] policy:[vehicle] policy-id:vehicle policy-version:1.0.6]] 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:25:51.0704+00:00] dirParts : [ node vehicle] 09:27:21 policy-opa-pdp | INFO[2025-06-19T09:25:51.0706+00:00] Matched policy: &{Data:[node.vehicle] Policy:[vehicle] PolicyID:vehicle PolicyVersion:1.0.6} 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:25:51.0708+00:00] root: /node/vehicle 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:25:51.0709+00:00] path : round 09:27:21 policy-opa-pdp | INFO[2025-06-19T09:25:51.0711+00:00] calling ParsePatchPathEscaped to check the path 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:25:51.0713+00:00] No path conflicts detected 09:27:21 policy-opa-pdp | INFO[2025-06-19T09:25:51.0714+00:00] Updated the data in the corresponding path successfully 09:27:21 policy-opa-pdp | INFO[2025-06-19T09:25:51.0817+00:00] PDP received a request to get data through API 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:25:51.0818+00:00] datapath to get Data : /node/vehicle 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:25:51.0819+00:00] Json Data at /node/vehicle: {"vehicles":[{"owner":"user1","status":"available","type":"car","vehicle_id":"v1"},{"owner":"user2","status":"in use","type":"bike","vehicle_id":"v2"}]} 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:25:51.0919+00:00] PDP received a decision request. 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:25:51.0920+00:00] Headers processed for requestId: Unknown 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:25:51.0923+00:00] Validation successful for request fields 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:25:51.0924+00:00] SDK making a decision 09:27:21 policy-opa-pdp | {"decision_id":"d2fc1ab8-8e34-474f-ac55-5fba6cb75210","input":{"actions":["use"],"attributes":["type","status"],"user":"user1","vehicle_id":"v1"},"labels":{"id":"127b058e-62a4-4e4d-8cd3-39c05eb84176","version":"1.1.0"},"level":"info","metrics":{"timer_rego_external_resolve_ns":630,"timer_rego_query_compile_ns":108102,"timer_rego_query_eval_ns":420277,"timer_rego_query_parse_ns":85942,"timer_sdk_decision_eval_ns":743033},"msg":"Decision Log","nd_builtin_cache":null,"path":"vehicle","result":{"action_is_granted":true,"allow":true,"user_has_vehicle_access":[{"status":"available","type":"car"}]},"time":"2025-06-19T09:25:51Z","timestamp":"2025-06-19T09:25:51.092453726Z","type":"openpolicyagent.org/decision_logs"} 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:25:51.0935+00:00] RAW opa Decision output: 09:27:21 policy-opa-pdp | { 09:27:21 policy-opa-pdp | "ID": "d2fc1ab8-8e34-474f-ac55-5fba6cb75210", 09:27:21 policy-opa-pdp | "Result": { 09:27:21 policy-opa-pdp | "action_is_granted": true, 09:27:21 policy-opa-pdp | "allow": true, 09:27:21 policy-opa-pdp | "user_has_vehicle_access": [ 09:27:21 policy-opa-pdp | { 09:27:21 policy-opa-pdp | "status": "available", 09:27:21 policy-opa-pdp | "type": "car" 09:27:21 policy-opa-pdp | } 09:27:21 policy-opa-pdp | ] 09:27:21 policy-opa-pdp | }, 09:27:21 policy-opa-pdp | "Provenance": { 09:27:21 policy-opa-pdp | "version": "1.1.0", 09:27:21 policy-opa-pdp | "build_commit": "", 09:27:21 policy-opa-pdp | "build_timestamp": "", 09:27:21 
policy-opa-pdp | "build_hostname": "" 09:27:21 policy-opa-pdp | } 09:27:21 policy-opa-pdp | } 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:25:51.1034+00:00] PDP received a decision request. 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:25:51.1037+00:00] Headers processed for requestId: Unknown 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:25:51.1042+00:00] Validation successful for request fields 09:27:21 policy-opa-pdp | WARN[2025-06-19T09:25:51.1044+00:00] Policy Name vehile does not exist 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:25:51.1117+00:00] PDP received a decision request. 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:25:51.1118+00:00] Headers processed for requestId: Unknown 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:25:51.1120+00:00] Validation successful for request fields 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:25:51.1121+00:00] SDK making a decision 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:25:51.1128+00:00] RAW opa Decision output: 09:27:21 policy-opa-pdp | { 09:27:21 policy-opa-pdp | "ID": "5eae405f-c547-4f19-9862-b211300794cb", 09:27:21 policy-opa-pdp | "Result": { 09:27:21 policy-opa-pdp | "action_is_granted": true, 09:27:21 policy-opa-pdp | "allow": true, 09:27:21 policy-opa-pdp | "user_has_vehicle_access": [ 09:27:21 policy-opa-pdp | { 09:27:21 policy-opa-pdp | "status": "available", 09:27:21 policy-opa-pdp | "type": "car" 09:27:21 policy-opa-pdp | } 09:27:21 policy-opa-pdp | ] 09:27:21 policy-opa-pdp | }, 09:27:21 policy-opa-pdp | "Provenance": { 09:27:21 policy-opa-pdp | "version": "1.1.0", 09:27:21 policy-opa-pdp | "build_commit": "", 09:27:21 policy-opa-pdp | "build_timestamp": "", 09:27:21 policy-opa-pdp | "build_hostname": "" 09:27:21 policy-opa-pdp | } 09:27:21 policy-opa-pdp | } 09:27:21 policy-opa-pdp | {"decision_id":"5eae405f-c547-4f19-9862-b211300794cb","input":{"actions":["use"],"attributes":["type","status"],"user":"user1","vehicle_id":"v1"},"labels":{"id":"127b058e-62a4-4e4d-8cd3-39c05eb84176","version":"1.1.0"},"level":"info","metrics":{"timer_rego_external_resolve_ns":610,"timer_rego_query_eval_ns":377236,"timer_sdk_decision_eval_ns":531339},"msg":"Decision Log","nd_builtin_cache":null,"path":"vehicle","result":{"action_is_granted":true,"allow":true,"user_has_vehicle_access":[{"status":"available","type":"car"}]},"time":"2025-06-19T09:25:51Z","timestamp":"2025-06-19T09:25:51.112248938Z","type":"openpolicyagent.org/decision_logs"} 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:25:51.4200+00:00] [IN|KAFKA|policy-pdp-pap] 09:27:21 policy-opa-pdp | {"source":"pap-6f1d1af8-1c2a-4d8a-8de4-bf40559a9b3f","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"vehicle","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"9c9107b1-c799-4c7b-9973-58924a565674","timestampMs":1750325151380,"name":"opa-6446e8da-32c3-48c2-9df7-d65664d9050e","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:25:51.4204+00:00] messageType: PDP_UPDATE 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:25:51.4208+00:00] PDP_UPDATE Message received: {"source":"pap-6f1d1af8-1c2a-4d8a-8de4-bf40559a9b3f","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"vehicle","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"9c9107b1-c799-4c7b-9973-58924a565674","timestampMs":1750325151380,"name":"opa-6446e8da-32c3-48c2-9df7-d65664d9050e","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 09:27:21 policy-opa-pdp | INFO[2025-06-19T09:25:51.4210+00:00] Found Policies to be undeployed 09:27:21 policy-opa-pdp | 
INFO[2025-06-19T09:25:51.4215+00:00] Extracted Policy Name: vehicle, Version: 1.0.6 for undeployment 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:25:51.4217+00:00] Deleting Policy from OPA : /vehicle 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:25:51.4242+00:00] Removing policy directory: /opt/policies/vehicle 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:25:51.4248+00:00] Deleting data from OPA : /node/vehicle 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:25:51.4249+00:00] Analyzing dataPath: /node/vehicle 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:25:51.4250+00:00] Path segments: [ node vehicle] 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:25:51.4251+00:00] Path doesn't have any parent-child hierarchy;so returning the original path: /node/vehicle 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:25:51.4252+00:00] Removing data directory: /opt/data/node/vehicle 09:27:21 policy-opa-pdp | INFO[2025-06-19T09:25:51.4254+00:00] PoliciesDeployed Map: { 09:27:21 policy-opa-pdp | "deployed_policies_dict": [ 09:27:21 policy-opa-pdp | { 09:27:21 policy-opa-pdp | "data": [ 09:27:21 policy-opa-pdp | "node.slice.capacity.check" 09:27:21 policy-opa-pdp | ], 09:27:21 policy-opa-pdp | "policy": [ 09:27:21 policy-opa-pdp | "slice.capacity.check" 09:27:21 policy-opa-pdp | ], 09:27:21 policy-opa-pdp | "policy-id": "slice.capacity.check", 09:27:21 policy-opa-pdp | "policy-version": "1.0.0" 09:27:21 policy-opa-pdp | } 09:27:21 policy-opa-pdp | ] 09:27:21 policy-opa-pdp | } 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:25:51.4255+00:00] Policies Map After Undeployment : { 09:27:21 policy-opa-pdp | "deployed_policies_dict": [ 09:27:21 policy-opa-pdp | { 09:27:21 policy-opa-pdp | "data": [ 09:27:21 policy-opa-pdp | "node.slice.capacity.check" 09:27:21 policy-opa-pdp | ], 09:27:21 policy-opa-pdp | "policy": [ 09:27:21 policy-opa-pdp | "slice.capacity.check" 09:27:21 policy-opa-pdp | ], 09:27:21 policy-opa-pdp | "policy-id": "slice.capacity.check", 09:27:21 policy-opa-pdp | "policy-version": "1.0.0" 09:27:21 policy-opa-pdp | } 09:27:21 policy-opa-pdp | ] 09:27:21 policy-opa-pdp | } 09:27:21 policy-opa-pdp | INFO[2025-06-19T09:25:51.4256+00:00] Processed policies_to_be_undeployed successfully 09:27:21 policy-opa-pdp | INFO[2025-06-19T09:25:51.4257+00:00] Sending PDP Status With Update Response 09:27:21 policy-opa-pdp | 2025/06/19 09:25:51 KafkaProducer or producer produce message 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:25:51.4259+00:00] [OUT|KAFKA|policy-pdp-pap] 09:27:21 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"9c9107b1-c799-4c7b-9973-58924a565674","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"vehicle\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-6446e8da-32c3-48c2-9df7-d65664d9050e","requestId":"4f96631e-168c-42a3-be34-01f25a59055b","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750325151425","deploymentInstanceInfo":""} 09:27:21 policy-opa-pdp | INFO[2025-06-19T09:25:51.4260+00:00] PDP_STATUS Message Sent Successfully 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:25:51.4260+00:00] 0 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:25:51.4347+00:00] [IN|KAFKA|policy-pdp-pap] 09:27:21 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp 
Update","response":{"responseTo":"9c9107b1-c799-4c7b-9973-58924a565674","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"vehicle\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-6446e8da-32c3-48c2-9df7-d65664d9050e","requestId":"4f96631e-168c-42a3-be34-01f25a59055b","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750325151425","deploymentInstanceInfo":""} 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:25:51.4349+00:00] messageType: PDP_STATUS 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:25:51.4349+00:00] discarding event of type PDP_STATUS 09:27:21 policy-opa-pdp | INFO[2025-06-19T09:25:51.8008+00:00] PDP received a request to get data through API 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:25:51.8010+00:00] datapath to get Data : /node/vehicle 09:27:21 policy-opa-pdp | WARN[2025-06-19T09:25:51.8012+00:00] Error in reading data under /node/vehicle path 09:27:21 policy-opa-pdp | ERRO[2025-06-19T09:25:51.8014+00:00] Error in getting data - storage_not_found_error: /node/vehicle: document does not exist 09:27:21 policy-opa-pdp | INFO[2025-06-19T09:25:51.8120+00:00] PDP received a request to update data through API 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:25:51.8125+00:00] All fields are valid! 09:27:21 policy-opa-pdp | INFO[2025-06-19T09:25:51.8129+00:00] data : [map[op:remove path:/round]] 09:27:21 policy-opa-pdp | INFO[2025-06-19T09:25:51.8131+00:00] policy name : vehicle 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:25:51.8133+00:00] deployedPolicies [map[data:[node.slice.capacity.check] policy:[slice.capacity.check] policy-id:slice.capacity.check policy-version:1.0.0]] 09:27:21 policy-opa-pdp | ERRO[2025-06-19T09:25:51.8135+00:00] Policy associated with the patch request does not exists 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:25:52.5422+00:00] [IN|KAFKA|policy-pdp-pap] 09:27:21 policy-opa-pdp | 
{"source":"pap-6f1d1af8-1c2a-4d8a-8de4-bf40559a9b3f","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.abac":"ewogICAgInNlbnNvcl9kYXRhIjogWwogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDEiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiU3JpIExhbmthIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjI4IEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjYiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDAyIiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkNvbG9tYm8iLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMzAgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjEyMDAgbW0iLAogICAgICAgICAgICAid2luZHNwZWVkIjogIjYuMCBtL3MiLAogICAgICAgICAgICAiaHVtaWRpdHkiOiAiNDUlIiwKICAgICAgICAgICAgInBhcnRpY2xlX2RlbnNpdHkiOiAiMS41IGcvbCIsCiAgICAgICAgICAgICJ0aW1lc3RhbXAiOiAiMjAyNC0wMi0yNiIKICAgICAgICB9LAogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDMiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiS2FuZHkiLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMjUgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjgwMCBtbSIsCiAgICAgICAgICAgICJ3aW5kc3BlZWQiOiAiNC41IG0vcyIsCiAgICAgICAgICAgICJodW1pZGl0eSI6ICI2MCUiLAogICAgICAgICAgICAicGFydGljbGVfZGVuc2l0eSI6ICIxLjEgZy9sIiwKICAgICAgICAgICAgInRpbWVzdGFtcCI6ICIyMDI0LTAyLTI2IgogICAgICAgIH0sCiAgICAgICAgewogICAgICAgICAgICAiaWQiOiAiMDAwNCIsCiAgICAgICAgICAgICJsb2NhdGlvbiI6ICJHYWxsZSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI3LjIgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjMwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuOCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA1IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkphZmZuYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICItNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiMzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICIzLjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjIwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjAuOSBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA2IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIlRyaW5jb21hbGVlIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjIwIEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjU1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA3IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk51d2FyYSBFbGl5YSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNjAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI0LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjUwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA4IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkFudXJhZGhhcHVyYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyOCBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA5IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk1hdGFyYSIsC
iAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzMiBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiOTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI2LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjY1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfQogICAgXQp9"},"policy":{"abac":"cGFja2FnZSBhYmFjCgppbXBvcnQgcmVnby52MQoKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQoKYWxsb3cgaWYgewogdmlld2FibGVfc2Vuc29yX2RhdGEKIGFjdGlvbl9pc19yZWFkCn0KCmFjdGlvbl9pc19yZWFkIGlmICJyZWFkIiBpbiBpbnB1dC5hY3Rpb25zCgp2aWV3YWJsZV9zZW5zb3JfZGF0YSBjb250YWlucyB2aWV3X2RhdGEgaWYgewogc29tZSBzZW5zb3JfZGF0YSBpbiBkYXRhLm5vZGUuYWJhYy5zZW5zb3JfZGF0YQogc2Vuc29yX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KIHNlbnNvcl9kYXRhLnRpbWVzdGFtcCA8IGlucHV0LnRpbWVfcGVyaW9kLnRvCgogdmlld19kYXRhIDo9IHtkYXRhdHlwZTogc2Vuc29yX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"abac","version":"1.0.7","metadata":{"policy-id":"abac","policy-version":"1.0.7"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"db8a1006-0422-4cb8-9506-bb64c1a65fa0","timestampMs":1750325152522,"name":"opa-6446e8da-32c3-48c2-9df7-d65664d9050e","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:25:52.5427+00:00] messageType: PDP_UPDATE 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:25:52.5432+00:00] PDP_UPDATE Message received: {"source":"pap-6f1d1af8-1c2a-4d8a-8de4-bf40559a9b3f","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.abac":"ewogICAgInNlbnNvcl9kYXRhIjogWwogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDEiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiU3JpIExhbmthIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjI4IEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjYiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDAyIiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkNvbG9tYm8iLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMzAgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjEyMDAgbW0iLAogICAgICAgICAgICAid2luZHNwZWVkIjogIjYuMCBtL3MiLAogICAgICAgICAgICAiaHVtaWRpdHkiOiAiNDUlIiwKICAgICAgICAgICAgInBhcnRpY2xlX2RlbnNpdHkiOiAiMS41IGcvbCIsCiAgICAgICAgICAgICJ0aW1lc3RhbXAiOiAiMjAyNC0wMi0yNiIKICAgICAgICB9LAogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDMiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiS2FuZHkiLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMjUgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjgwMCBtbSIsCiAgICAgICAgICAgICJ3aW5kc3BlZWQiOiAiNC41IG0vcyIsCiAgICAgICAgICAgICJodW1pZGl0eSI6ICI2MCUiLAogICAgICAgICAgICAicGFydGljbGVfZGVuc2l0eSI6ICIxLjEgZy9sIiwKICAgICAgICAgICAgInRpbWVzdGFtcCI6ICIyMDI0LTAyLTI2IgogICAgICAgIH0sCiAgICAgICAgewogICAgICAgICAgICAiaWQiOiAiMDAwNCIsCiAgICAgICAgICAgICJsb2NhdGlvbiI6ICJHYWxsZSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI3LjIgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjMwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuOCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA1IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkphZmZuYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICItNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiMzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICIzLjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjIwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjAuOSBnL2wi
LAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA2IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIlRyaW5jb21hbGVlIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjIwIEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjU1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA3IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk51d2FyYSBFbGl5YSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNjAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI0LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjUwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA4IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkFudXJhZGhhcHVyYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyOCBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA5IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk1hdGFyYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzMiBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiOTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI2LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjY1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfQogICAgXQp9"},"policy":{"abac":"cGFja2FnZSBhYmFjCgppbXBvcnQgcmVnby52MQoKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQoKYWxsb3cgaWYgewogdmlld2FibGVfc2Vuc29yX2RhdGEKIGFjdGlvbl9pc19yZWFkCn0KCmFjdGlvbl9pc19yZWFkIGlmICJyZWFkIiBpbiBpbnB1dC5hY3Rpb25zCgp2aWV3YWJsZV9zZW5zb3JfZGF0YSBjb250YWlucyB2aWV3X2RhdGEgaWYgewogc29tZSBzZW5zb3JfZGF0YSBpbiBkYXRhLm5vZGUuYWJhYy5zZW5zb3JfZGF0YQogc2Vuc29yX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KIHNlbnNvcl9kYXRhLnRpbWVzdGFtcCA8IGlucHV0LnRpbWVfcGVyaW9kLnRvCgogdmlld19kYXRhIDo9IHtkYXRhdHlwZTogc2Vuc29yX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"abac","version":"1.0.7","metadata":{"policy-id":"abac","policy-version":"1.0.7"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"db8a1006-0422-4cb8-9506-bb64c1a65fa0","timestampMs":1750325152522,"name":"opa-6446e8da-32c3-48c2-9df7-d65664d9050e","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:25:52.5434+00:00] Check if Policy is Already Deployed: { 09:27:21 policy-opa-pdp | "deployed_policies_dict": [ 09:27:21 policy-opa-pdp | { 09:27:21 policy-opa-pdp | "data": [ 09:27:21 policy-opa-pdp | "node.slice.capacity.check" 09:27:21 policy-opa-pdp | ], 09:27:21 policy-opa-pdp | "policy": [ 09:27:21 policy-opa-pdp | "slice.capacity.check" 09:27:21 policy-opa-pdp | ], 09:27:21 policy-opa-pdp | "policy-id": "slice.capacity.check", 09:27:21 policy-opa-pdp | "policy-version": "1.0.0" 09:27:21 policy-opa-pdp | } 09:27:21 policy-opa-pdp | ] 09:27:21 policy-opa-pdp | } 09:27:21 policy-opa-pdp | INFO[2025-06-19T09:25:52.5437+00:00] Policy is new and should be deployed: abac 1.0.7 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:25:52.5440+00:00] Policy Is Allowed: abac 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:25:52.5441+00:00] Validating properties data for policy: abac 09:27:21 
policy-opa-pdp | DEBU[2025-06-19T09:25:52.5443+00:00] Validating properties policy for policy: abac 09:27:21 policy-opa-pdp | INFO[2025-06-19T09:25:52.5445+00:00] Validation successful for policy: abac 09:27:21 policy-opa-pdp | INFO[2025-06-19T09:25:52.5449+00:00] Directory created: /opt/policies/abac 09:27:21 policy-opa-pdp | INFO[2025-06-19T09:25:52.5452+00:00] Policy file saved: /opt/policies/abac/policy.rego 09:27:21 policy-opa-pdp | INFO[2025-06-19T09:25:52.5455+00:00] Directory created: /opt/data/node/abac 09:27:21 policy-opa-pdp | INFO[2025-06-19T09:25:52.5461+00:00] Data file saved: /opt/data/node/abac/data.json 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:25:52.5463+00:00] Before calling combinedoutput 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:25:52.5686+00:00] Bundle Built Sucessfully.... 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:25:52.5727+00:00] storage not found creating : /node/abac 09:27:21 policy-opa-pdp | INFO[2025-06-19T09:25:52.5729+00:00] PoliciesDeployed Map: { 09:27:21 policy-opa-pdp | "deployed_policies_dict": [ 09:27:21 policy-opa-pdp | { 09:27:21 policy-opa-pdp | "data": [ 09:27:21 policy-opa-pdp | "node.slice.capacity.check" 09:27:21 policy-opa-pdp | ], 09:27:21 policy-opa-pdp | "policy": [ 09:27:21 policy-opa-pdp | "slice.capacity.check" 09:27:21 policy-opa-pdp | ], 09:27:21 policy-opa-pdp | "policy-id": "slice.capacity.check", 09:27:21 policy-opa-pdp | "policy-version": "1.0.0" 09:27:21 policy-opa-pdp | }, 09:27:21 policy-opa-pdp | { 09:27:21 policy-opa-pdp | "data": [ 09:27:21 policy-opa-pdp | "node.abac" 09:27:21 policy-opa-pdp | ], 09:27:21 policy-opa-pdp | "policy": [ 09:27:21 policy-opa-pdp | "abac" 09:27:21 policy-opa-pdp | ], 09:27:21 policy-opa-pdp | "policy-id": "abac", 09:27:21 policy-opa-pdp | "policy-version": "1.0.7" 09:27:21 policy-opa-pdp | } 09:27:21 policy-opa-pdp | ] 09:27:21 policy-opa-pdp | } 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:25:52.5731+00:00] Loaded Policy: abac 09:27:21 policy-opa-pdp | INFO[2025-06-19T09:25:52.5732+00:00] Processed policies_to_be_deployed successfully 09:27:21 policy-opa-pdp | INFO[2025-06-19T09:25:52.5733+00:00] Sending PDP Status With Update Response 09:27:21 policy-opa-pdp | 2025/06/19 09:25:52 KafkaProducer or producer produce message 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:25:52.5736+00:00] [OUT|KAFKA|policy-pdp-pap] 09:27:21 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"db8a1006-0422-4cb8-9506-bb64c1a65fa0","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"abac\": \"1.0.7\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"abac","version":"1.0.7"}],"name":"opa-6446e8da-32c3-48c2-9df7-d65664d9050e","requestId":"354bbffa-4b61-43cc-ab43-09996db2b0f3","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750325152573","deploymentInstanceInfo":""} 09:27:21 policy-opa-pdp | INFO[2025-06-19T09:25:52.5736+00:00] PDP_STATUS Message Sent Successfully 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:25:52.5737+00:00] 0 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:25:52.5821+00:00] [IN|KAFKA|policy-pdp-pap] 09:27:21 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"db8a1006-0422-4cb8-9506-bb64c1a65fa0","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for 
all policies: {\n \"abac\": \"1.0.7\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"abac","version":"1.0.7"}],"name":"opa-6446e8da-32c3-48c2-9df7-d65664d9050e","requestId":"354bbffa-4b61-43cc-ab43-09996db2b0f3","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750325152573","deploymentInstanceInfo":""} 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:25:52.5823+00:00] messageType: PDP_STATUS 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:25:52.5824+00:00] discarding event of type PDP_STATUS 09:27:21 policy-opa-pdp | INFO[2025-06-19T09:26:16.6125+00:00] PDP received a request to get data through API 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:26:16.6126+00:00] datapath to get Data : /node/abac 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:26:16.6127+00:00] Json Data at /node/abac: {"sensor_data":[{"humidity":"40%","id":"0001","location":"Sri Lanka","particle_density":"1.3 g/l","precipitation":"1000 mm","temperature":"28 C","timestamp":"2024-02-26","windspeed":"5.5 m/s"},{"humidity":"45%","id":"0002","location":"Colombo","particle_density":"1.5 g/l","precipitation":"1200 mm","temperature":"30 C","timestamp":"2024-02-26","windspeed":"6.0 m/s"},{"humidity":"60%","id":"0003","location":"Kandy","particle_density":"1.1 g/l","precipitation":"800 mm","temperature":"25 C","timestamp":"2024-02-26","windspeed":"4.5 m/s"},{"humidity":"30%","id":"0004","location":"Galle","particle_density":"1.8 g/l","precipitation":"500 mm","temperature":"35 C","timestamp":"2024-02-27","windspeed":"7.2 m/s"},{"humidity":"20%","id":"0005","location":"Jaffna","particle_density":"0.9 g/l","precipitation":"300 mm","temperature":"-5 C","timestamp":"2024-02-27","windspeed":"3.8 m/s"},{"humidity":"55%","id":"0006","location":"Trincomalee","particle_density":"1.2 g/l","precipitation":"1000 mm","temperature":"20 C","timestamp":"2024-02-28","windspeed":"5.0 m/s"},{"humidity":"50%","id":"0007","location":"Nuwara Eliya","particle_density":"1.3 g/l","precipitation":"600 mm","temperature":"25 C","timestamp":"2024-02-28","windspeed":"4.0 m/s"},{"humidity":"40%","id":"0008","location":"Anuradhapura","particle_density":"1.4 g/l","precipitation":"700 mm","temperature":"28 C","timestamp":"2024-02-29","windspeed":"5.8 m/s"},{"humidity":"65%","id":"0009","location":"Matara","particle_density":"1.6 g/l","precipitation":"900 mm","temperature":"32 C","timestamp":"2024-02-29","windspeed":"6.5 m/s"}]} 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:26:16.6284+00:00] PDP received a decision request. 
09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:26:16.6286+00:00] Headers processed for requestId: Unknown 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:26:16.6289+00:00] Validation successful for request fields 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:26:16.6290+00:00] SDK making a decision 09:27:21 policy-opa-pdp | {"decision_id":"a9cbe8da-a44d-47b7-ab3b-91fdd67c63c0","input":{"actions":["read"],"datatypes":["location","temperature","precipitation","windspeed"],"time_period":{"from":"2024-02-27","to":"2024-02-29"}},"labels":{"id":"127b058e-62a4-4e4d-8cd3-39c05eb84176","version":"1.1.0"},"level":"info","metrics":{"timer_rego_external_resolve_ns":850,"timer_rego_query_compile_ns":150863,"timer_rego_query_eval_ns":812333,"timer_rego_query_parse_ns":111602,"timer_sdk_decision_eval_ns":1335271},"msg":"Decision Log","nd_builtin_cache":null,"path":"abac","result":{"action_is_read":true,"allow":true,"viewable_sensor_data":[{"location":"Galle","precipitation":"500 mm","temperature":"35 C","windspeed":"7.2 m/s"},{"location":"Jaffna","precipitation":"300 mm","temperature":"-5 C","windspeed":"3.8 m/s"},{"location":"Nuwara Eliya","precipitation":"600 mm","temperature":"25 C","windspeed":"4.0 m/s"},{"location":"Trincomalee","precipitation":"1000 mm","temperature":"20 C","windspeed":"5.0 m/s"}]},"time":"2025-06-19T09:26:16Z","timestamp":"2025-06-19T09:26:16.629154656Z","type":"openpolicyagent.org/decision_logs"} 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:26:16.6312+00:00] RAW opa Decision output: 09:27:21 policy-opa-pdp | { 09:27:21 policy-opa-pdp | "ID": "a9cbe8da-a44d-47b7-ab3b-91fdd67c63c0", 09:27:21 policy-opa-pdp | "Result": { 09:27:21 policy-opa-pdp | "action_is_read": true, 09:27:21 policy-opa-pdp | "allow": true, 09:27:21 policy-opa-pdp | "viewable_sensor_data": [ 09:27:21 policy-opa-pdp | { 09:27:21 policy-opa-pdp | "location": "Galle", 09:27:21 policy-opa-pdp | "precipitation": "500 mm", 09:27:21 policy-opa-pdp | "temperature": "35 C", 09:27:21 policy-opa-pdp | "windspeed": "7.2 m/s" 09:27:21 policy-opa-pdp | }, 09:27:21 policy-opa-pdp | { 09:27:21 policy-opa-pdp | "location": "Jaffna", 09:27:21 policy-opa-pdp | "precipitation": "300 mm", 09:27:21 policy-opa-pdp | "temperature": "-5 C", 09:27:21 policy-opa-pdp | "windspeed": "3.8 m/s" 09:27:21 policy-opa-pdp | }, 09:27:21 policy-opa-pdp | { 09:27:21 policy-opa-pdp | "location": "Nuwara Eliya", 09:27:21 policy-opa-pdp | "precipitation": "600 mm", 09:27:21 policy-opa-pdp | "temperature": "25 C", 09:27:21 policy-opa-pdp | "windspeed": "4.0 m/s" 09:27:21 policy-opa-pdp | }, 09:27:21 policy-opa-pdp | { 09:27:21 policy-opa-pdp | "location": "Trincomalee", 09:27:21 policy-opa-pdp | "precipitation": "1000 mm", 09:27:21 policy-opa-pdp | "temperature": "20 C", 09:27:21 policy-opa-pdp | "windspeed": "5.0 m/s" 09:27:21 policy-opa-pdp | } 09:27:21 policy-opa-pdp | ] 09:27:21 policy-opa-pdp | }, 09:27:21 policy-opa-pdp | "Provenance": { 09:27:21 policy-opa-pdp | "version": "1.1.0", 09:27:21 policy-opa-pdp | "build_commit": "", 09:27:21 policy-opa-pdp | "build_timestamp": "", 09:27:21 policy-opa-pdp | "build_hostname": "" 09:27:21 policy-opa-pdp | } 09:27:21 policy-opa-pdp | } 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:26:16.6398+00:00] PDP received a decision request. 
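
The abac decision above follows directly from the Rego rule carried in the earlier PDP_UPDATE: base64-decoded, it keeps rows of data.node.abac.sensor_data whose timestamp satisfies from <= timestamp < to and projects only the requested datatypes. A rough Python equivalent of that comprehension, using the sensor rows and input shown in the log (humidity and particle_density omitted since they are not requested), yields the same four rows:

# Sensor rows as returned by GET /node/abac, reduced to the fields used here.
sensor_data = [
    {"id": "0001", "location": "Sri Lanka",    "temperature": "28 C", "precipitation": "1000 mm", "windspeed": "5.5 m/s", "timestamp": "2024-02-26"},
    {"id": "0002", "location": "Colombo",      "temperature": "30 C", "precipitation": "1200 mm", "windspeed": "6.0 m/s", "timestamp": "2024-02-26"},
    {"id": "0003", "location": "Kandy",        "temperature": "25 C", "precipitation": "800 mm",  "windspeed": "4.5 m/s", "timestamp": "2024-02-26"},
    {"id": "0004", "location": "Galle",        "temperature": "35 C", "precipitation": "500 mm",  "windspeed": "7.2 m/s", "timestamp": "2024-02-27"},
    {"id": "0005", "location": "Jaffna",       "temperature": "-5 C", "precipitation": "300 mm",  "windspeed": "3.8 m/s", "timestamp": "2024-02-27"},
    {"id": "0006", "location": "Trincomalee",  "temperature": "20 C", "precipitation": "1000 mm", "windspeed": "5.0 m/s", "timestamp": "2024-02-28"},
    {"id": "0007", "location": "Nuwara Eliya", "temperature": "25 C", "precipitation": "600 mm",  "windspeed": "4.0 m/s", "timestamp": "2024-02-28"},
    {"id": "0008", "location": "Anuradhapura", "temperature": "28 C", "precipitation": "700 mm",  "windspeed": "5.8 m/s", "timestamp": "2024-02-29"},
    {"id": "0009", "location": "Matara",       "temperature": "32 C", "precipitation": "900 mm",  "windspeed": "6.5 m/s", "timestamp": "2024-02-29"},
]
request = {"actions": ["read"],
           "datatypes": ["location", "temperature", "precipitation", "windspeed"],
           "time_period": {"from": "2024-02-27", "to": "2024-02-29"}}

# Mirror of the Rego comprehension: keep rows in the time window, project the requested datatypes.
viewable = [{k: row[k] for k in request["datatypes"]}
            for row in sensor_data
            if request["time_period"]["from"] <= row["timestamp"] < request["time_period"]["to"]]
allow = "read" in request["actions"] and len(viewable) > 0  # approximates the Rego allow rule for this input

print(allow)     # True
print(viewable)  # Galle, Jaffna, Trincomalee, Nuwara Eliya; Rego returns these as a set, so ordering differs
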
09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:26:16.6398+00:00] Headers processed for requestId: Unknown 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:26:16.6400+00:00] Validation successful for request fields 09:27:21 policy-opa-pdp | WARN[2025-06-19T09:26:16.6401+00:00] Policy Name abc does not exist 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:26:16.6499+00:00] PDP received a decision request. 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:26:16.6500+00:00] Headers processed for requestId: Unknown 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:26:16.6504+00:00] Validation successful for request fields 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:26:16.6505+00:00] SDK making a decision 09:27:21 policy-opa-pdp | {"decision_id":"e568f4d3-56b9-48c3-b08a-6b98ce0c4b06","input":{"actions":["read"],"datatypes":["location","temperature","precipitation","windspeed"],"time_period":{"from":"2024-02-27","to":"2024-02-29"}},"labels":{"id":"127b058e-62a4-4e4d-8cd3-39c05eb84176","version":"1.1.0"},"level":"info","metrics":{"timer_rego_external_resolve_ns":760,"timer_rego_query_eval_ns":708262,"timer_sdk_decision_eval_ns":819263},"msg":"Decision Log","nd_builtin_cache":null,"path":"abac","result":{"action_is_read":true,"allow":true,"viewable_sensor_data":[{"location":"Galle","precipitation":"500 mm","temperature":"35 C","windspeed":"7.2 m/s"},{"location":"Jaffna","precipitation":"300 mm","temperature":"-5 C","windspeed":"3.8 m/s"},{"location":"Nuwara Eliya","precipitation":"600 mm","temperature":"25 C","windspeed":"4.0 m/s"},{"location":"Trincomalee","precipitation":"1000 mm","temperature":"20 C","windspeed":"5.0 m/s"}]},"time":"2025-06-19T09:26:16Z","timestamp":"2025-06-19T09:26:16.650594577Z","type":"openpolicyagent.org/decision_logs"} 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:26:16.6517+00:00] RAW opa Decision output: 09:27:21 policy-opa-pdp | { 09:27:21 policy-opa-pdp | "ID": "e568f4d3-56b9-48c3-b08a-6b98ce0c4b06", 09:27:21 policy-opa-pdp | "Result": { 09:27:21 policy-opa-pdp | "action_is_read": true, 09:27:21 policy-opa-pdp | "allow": true, 09:27:21 policy-opa-pdp | "viewable_sensor_data": [ 09:27:21 policy-opa-pdp | { 09:27:21 policy-opa-pdp | "location": "Galle", 09:27:21 policy-opa-pdp | "precipitation": "500 mm", 09:27:21 policy-opa-pdp | "temperature": "35 C", 09:27:21 policy-opa-pdp | "windspeed": "7.2 m/s" 09:27:21 policy-opa-pdp | }, 09:27:21 policy-opa-pdp | { 09:27:21 policy-opa-pdp | "location": "Jaffna", 09:27:21 policy-opa-pdp | "precipitation": "300 mm", 09:27:21 policy-opa-pdp | "temperature": "-5 C", 09:27:21 policy-opa-pdp | "windspeed": "3.8 m/s" 09:27:21 policy-opa-pdp | }, 09:27:21 policy-opa-pdp | { 09:27:21 policy-opa-pdp | "location": "Nuwara Eliya", 09:27:21 policy-opa-pdp | "precipitation": "600 mm", 09:27:21 policy-opa-pdp | "temperature": "25 C", 09:27:21 policy-opa-pdp | "windspeed": "4.0 m/s" 09:27:21 policy-opa-pdp | }, 09:27:21 policy-opa-pdp | { 09:27:21 policy-opa-pdp | "location": "Trincomalee", 09:27:21 policy-opa-pdp | "precipitation": "1000 mm", 09:27:21 policy-opa-pdp | "temperature": "20 C", 09:27:21 policy-opa-pdp | "windspeed": "5.0 m/s" 09:27:21 policy-opa-pdp | } 09:27:21 policy-opa-pdp | ] 09:27:21 policy-opa-pdp | }, 09:27:21 policy-opa-pdp | "Provenance": { 09:27:21 policy-opa-pdp | "version": "1.1.0", 09:27:21 policy-opa-pdp | "build_commit": "", 09:27:21 policy-opa-pdp | "build_timestamp": "", 09:27:21 policy-opa-pdp | "build_hostname": "" 09:27:21 policy-opa-pdp | } 09:27:21 policy-opa-pdp | } 09:27:21 policy-opa-pdp | 
DEBU[2025-06-19T09:26:17.2809+00:00] [IN|KAFKA|policy-pdp-pap] 09:27:21 policy-opa-pdp | {"source":"pap-6f1d1af8-1c2a-4d8a-8de4-bf40559a9b3f","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"abac","version":"1.0.7"}],"messageName":"PDP_UPDATE","requestId":"8652fa24-c749-4491-9f90-13545ab0a216","timestampMs":1750325177236,"name":"opa-6446e8da-32c3-48c2-9df7-d65664d9050e","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:26:17.2811+00:00] messageType: PDP_UPDATE 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:26:17.2813+00:00] PDP_UPDATE Message received: {"source":"pap-6f1d1af8-1c2a-4d8a-8de4-bf40559a9b3f","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"abac","version":"1.0.7"}],"messageName":"PDP_UPDATE","requestId":"8652fa24-c749-4491-9f90-13545ab0a216","timestampMs":1750325177236,"name":"opa-6446e8da-32c3-48c2-9df7-d65664d9050e","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 09:27:21 policy-opa-pdp | INFO[2025-06-19T09:26:17.2813+00:00] Found Policies to be undeployed 09:27:21 policy-opa-pdp | INFO[2025-06-19T09:26:17.2813+00:00] Extracted Policy Name: abac, Version: 1.0.7 for undeployment 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:26:17.2814+00:00] Deleting Policy from OPA : /abac 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:26:17.2840+00:00] Removing policy directory: /opt/policies/abac 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:26:17.2843+00:00] Deleting data from OPA : /node/abac 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:26:17.2844+00:00] Analyzing dataPath: /node/abac 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:26:17.2844+00:00] Path segments: [ node abac] 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:26:17.2845+00:00] Path doesn't have any parent-child hierarchy;so returning the original path: /node/abac 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:26:17.2846+00:00] Removing data directory: /opt/data/node/abac 09:27:21 policy-opa-pdp | INFO[2025-06-19T09:26:17.2849+00:00] PoliciesDeployed Map: { 09:27:21 policy-opa-pdp | "deployed_policies_dict": [ 09:27:21 policy-opa-pdp | { 09:27:21 policy-opa-pdp | "data": [ 09:27:21 policy-opa-pdp | "node.slice.capacity.check" 09:27:21 policy-opa-pdp | ], 09:27:21 policy-opa-pdp | "policy": [ 09:27:21 policy-opa-pdp | "slice.capacity.check" 09:27:21 policy-opa-pdp | ], 09:27:21 policy-opa-pdp | "policy-id": "slice.capacity.check", 09:27:21 policy-opa-pdp | "policy-version": "1.0.0" 09:27:21 policy-opa-pdp | } 09:27:21 policy-opa-pdp | ] 09:27:21 policy-opa-pdp | } 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:26:17.2849+00:00] Policies Map After Undeployment : { 09:27:21 policy-opa-pdp | "deployed_policies_dict": [ 09:27:21 policy-opa-pdp | { 09:27:21 policy-opa-pdp | "data": [ 09:27:21 policy-opa-pdp | "node.slice.capacity.check" 09:27:21 policy-opa-pdp | ], 09:27:21 policy-opa-pdp | "policy": [ 09:27:21 policy-opa-pdp | "slice.capacity.check" 09:27:21 policy-opa-pdp | ], 09:27:21 policy-opa-pdp | "policy-id": "slice.capacity.check", 09:27:21 policy-opa-pdp | "policy-version": "1.0.0" 09:27:21 policy-opa-pdp | } 09:27:21 policy-opa-pdp | ] 09:27:21 policy-opa-pdp | } 09:27:21 policy-opa-pdp | INFO[2025-06-19T09:26:17.2851+00:00] Processed policies_to_be_undeployed successfully 09:27:21 policy-opa-pdp | INFO[2025-06-19T09:26:17.2851+00:00] Sending PDP Status With Update Response 09:27:21 policy-opa-pdp | 2025/06/19 09:26:17 KafkaProducer or producer produce message 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:26:17.2853+00:00] [OUT|KAFKA|policy-pdp-pap] 09:27:21 policy-opa-pdp | 
{"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"8652fa24-c749-4491-9f90-13545ab0a216","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"abac\": \"1.0.7\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-6446e8da-32c3-48c2-9df7-d65664d9050e","requestId":"cbe4315c-17a7-4235-be3e-5590dfdb0738","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750325177285","deploymentInstanceInfo":""} 09:27:21 policy-opa-pdp | INFO[2025-06-19T09:26:17.2853+00:00] PDP_STATUS Message Sent Successfully 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:26:17.2853+00:00] 0 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:26:17.2931+00:00] [IN|KAFKA|policy-pdp-pap] 09:27:21 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"8652fa24-c749-4491-9f90-13545ab0a216","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"abac\": \"1.0.7\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-6446e8da-32c3-48c2-9df7-d65664d9050e","requestId":"cbe4315c-17a7-4235-be3e-5590dfdb0738","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750325177285","deploymentInstanceInfo":""} 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:26:17.2932+00:00] messageType: PDP_STATUS 09:27:21 policy-opa-pdp | DEBU[2025-06-19T09:26:17.2932+00:00] discarding event of type PDP_STATUS 09:27:21 policy-pap | Waiting for api port 6969... 09:27:21 policy-pap | api (172.17.0.8:6969) open 09:27:21 policy-pap | Waiting for kafka port 9092... 09:27:21 policy-pap | kafka (172.17.0.5:9092) open 09:27:21 policy-pap | Policy pap config file: /opt/app/policy/pap/etc/papParameters.yaml 09:27:21 policy-pap | PDP group configuration file: /opt/app/policy/pap/etc/mounted/groups.json 09:27:21 policy-pap | 09:27:21 policy-pap | . ____ _ __ _ _ 09:27:21 policy-pap | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ 09:27:21 policy-pap | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ 09:27:21 policy-pap | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) 09:27:21 policy-pap | ' |____| .__|_| |_|_| |_\__, | / / / / 09:27:21 policy-pap | =========|_|==============|___/=/_/_/_/ 09:27:21 policy-pap | 09:27:21 policy-pap | :: Spring Boot :: (v3.4.6) 09:27:21 policy-pap | 09:27:21 policy-pap | [2025-06-19T09:21:43.772+00:00|INFO|PolicyPapApplication|main] Starting PolicyPapApplication using Java 17.0.15 with PID 75 (/app/pap.jar started by policy in /opt/app/policy/pap/bin) 09:27:21 policy-pap | [2025-06-19T09:21:43.776+00:00|INFO|PolicyPapApplication|main] The following 1 profile is active: "default" 09:27:21 policy-pap | [2025-06-19T09:21:45.301+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. 09:27:21 policy-pap | [2025-06-19T09:21:45.399+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 84 ms. Found 7 JPA repository interfaces. 
09:27:21 policy-pap | [2025-06-19T09:21:46.395+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port 6969 (http) 09:27:21 policy-pap | [2025-06-19T09:21:46.408+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] 09:27:21 policy-pap | [2025-06-19T09:21:46.411+00:00|INFO|StandardService|main] Starting service [Tomcat] 09:27:21 policy-pap | [2025-06-19T09:21:46.411+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.41] 09:27:21 policy-pap | [2025-06-19T09:21:46.467+00:00|INFO|[/policy/pap/v1]|main] Initializing Spring embedded WebApplicationContext 09:27:21 policy-pap | [2025-06-19T09:21:46.467+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 2622 ms 09:27:21 policy-pap | [2025-06-19T09:21:46.909+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] 09:27:21 policy-pap | [2025-06-19T09:21:46.985+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.6.16.Final 09:27:21 policy-pap | [2025-06-19T09:21:47.033+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled 09:27:21 policy-pap | [2025-06-19T09:21:47.467+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer 09:27:21 policy-pap | [2025-06-19T09:21:47.515+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... 09:27:21 policy-pap | [2025-06-19T09:21:47.772+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.postgresql.jdbc.PgConnection@53a16dd6 09:27:21 policy-pap | [2025-06-19T09:21:47.774+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. 09:27:21 policy-pap | [2025-06-19T09:21:47.879+00:00|INFO|pooling|main] HHH10001005: Database info: 09:27:21 policy-pap | Database JDBC URL [Connecting through datasource 'HikariDataSource (HikariPool-1)'] 09:27:21 policy-pap | Database driver: undefined/unknown 09:27:21 policy-pap | Database version: 16.4 09:27:21 policy-pap | Autocommit mode: undefined/unknown 09:27:21 policy-pap | Isolation level: undefined/unknown 09:27:21 policy-pap | Minimum pool size: undefined/unknown 09:27:21 policy-pap | Maximum pool size: undefined/unknown 09:27:21 policy-pap | [2025-06-19T09:21:49.931+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) 09:27:21 policy-pap | [2025-06-19T09:21:49.936+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' 09:27:21 policy-pap | [2025-06-19T09:21:51.495+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 09:27:21 policy-pap | allow.auto.create.topics = true 09:27:21 policy-pap | auto.commit.interval.ms = 5000 09:27:21 policy-pap | auto.include.jmx.reporter = true 09:27:21 policy-pap | auto.offset.reset = latest 09:27:21 policy-pap | bootstrap.servers = [kafka:9092] 09:27:21 policy-pap | check.crcs = true 09:27:21 policy-pap | client.dns.lookup = use_all_dns_ips 09:27:21 policy-pap | client.id = consumer-c6a8bb97-2c08-4522-a637-4a0267c3b861-1 09:27:21 policy-pap | client.rack = 09:27:21 policy-pap | connections.max.idle.ms = 540000 09:27:21 policy-pap | default.api.timeout.ms = 60000 09:27:21 policy-pap | enable.auto.commit = true 09:27:21 policy-pap | enable.metrics.push = true 09:27:21 policy-pap | exclude.internal.topics = true 09:27:21 policy-pap | fetch.max.bytes = 52428800 09:27:21 policy-pap | fetch.max.wait.ms = 500 09:27:21 policy-pap | 
fetch.min.bytes = 1 09:27:21 policy-pap | group.id = c6a8bb97-2c08-4522-a637-4a0267c3b861 09:27:21 policy-pap | group.instance.id = null 09:27:21 policy-pap | group.protocol = classic 09:27:21 policy-pap | group.remote.assignor = null 09:27:21 policy-pap | heartbeat.interval.ms = 3000 09:27:21 policy-pap | interceptor.classes = [] 09:27:21 policy-pap | internal.leave.group.on.close = true 09:27:21 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 09:27:21 policy-pap | isolation.level = read_uncommitted 09:27:21 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 09:27:21 policy-pap | max.partition.fetch.bytes = 1048576 09:27:21 policy-pap | max.poll.interval.ms = 300000 09:27:21 policy-pap | max.poll.records = 500 09:27:21 policy-pap | metadata.max.age.ms = 300000 09:27:21 policy-pap | metadata.recovery.strategy = none 09:27:21 policy-pap | metric.reporters = [] 09:27:21 policy-pap | metrics.num.samples = 2 09:27:21 policy-pap | metrics.recording.level = INFO 09:27:21 policy-pap | metrics.sample.window.ms = 30000 09:27:21 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 09:27:21 policy-pap | receive.buffer.bytes = 65536 09:27:21 policy-pap | reconnect.backoff.max.ms = 1000 09:27:21 policy-pap | reconnect.backoff.ms = 50 09:27:21 policy-pap | request.timeout.ms = 30000 09:27:21 policy-pap | retry.backoff.max.ms = 1000 09:27:21 policy-pap | retry.backoff.ms = 100 09:27:21 policy-pap | sasl.client.callback.handler.class = null 09:27:21 policy-pap | sasl.jaas.config = null 09:27:21 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 09:27:21 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 09:27:21 policy-pap | sasl.kerberos.service.name = null 09:27:21 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 09:27:21 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 09:27:21 policy-pap | sasl.login.callback.handler.class = null 09:27:21 policy-pap | sasl.login.class = null 09:27:21 policy-pap | sasl.login.connect.timeout.ms = null 09:27:21 policy-pap | sasl.login.read.timeout.ms = null 09:27:21 policy-pap | sasl.login.refresh.buffer.seconds = 300 09:27:21 policy-pap | sasl.login.refresh.min.period.seconds = 60 09:27:21 policy-pap | sasl.login.refresh.window.factor = 0.8 09:27:21 policy-pap | sasl.login.refresh.window.jitter = 0.05 09:27:21 policy-pap | sasl.login.retry.backoff.max.ms = 10000 09:27:21 policy-pap | sasl.login.retry.backoff.ms = 100 09:27:21 policy-pap | sasl.mechanism = GSSAPI 09:27:21 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 09:27:21 policy-pap | sasl.oauthbearer.expected.audience = null 09:27:21 policy-pap | sasl.oauthbearer.expected.issuer = null 09:27:21 policy-pap | sasl.oauthbearer.header.urlencode = false 09:27:21 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 09:27:21 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 09:27:21 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 09:27:21 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 09:27:21 policy-pap | sasl.oauthbearer.scope.claim.name = scope 09:27:21 policy-pap | sasl.oauthbearer.sub.claim.name = sub 09:27:21 policy-pap | sasl.oauthbearer.token.endpoint.url = null 09:27:21 policy-pap | security.protocol = PLAINTEXT 09:27:21 policy-pap | security.providers = null 09:27:21 policy-pap | send.buffer.bytes = 131072 09:27:21 policy-pap | 
session.timeout.ms = 45000 09:27:21 policy-pap | socket.connection.setup.timeout.max.ms = 30000 09:27:21 policy-pap | socket.connection.setup.timeout.ms = 10000 09:27:21 policy-pap | ssl.cipher.suites = null 09:27:21 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 09:27:21 policy-pap | ssl.endpoint.identification.algorithm = https 09:27:21 policy-pap | ssl.engine.factory.class = null 09:27:21 policy-pap | ssl.key.password = null 09:27:21 policy-pap | ssl.keymanager.algorithm = SunX509 09:27:21 policy-pap | ssl.keystore.certificate.chain = null 09:27:21 policy-pap | ssl.keystore.key = null 09:27:21 policy-pap | ssl.keystore.location = null 09:27:21 policy-pap | ssl.keystore.password = null 09:27:21 policy-pap | ssl.keystore.type = JKS 09:27:21 policy-pap | ssl.protocol = TLSv1.3 09:27:21 policy-pap | ssl.provider = null 09:27:21 policy-pap | ssl.secure.random.implementation = null 09:27:21 policy-pap | ssl.trustmanager.algorithm = PKIX 09:27:21 policy-pap | ssl.truststore.certificates = null 09:27:21 policy-pap | ssl.truststore.location = null 09:27:21 policy-pap | ssl.truststore.password = null 09:27:21 policy-pap | ssl.truststore.type = JKS 09:27:21 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 09:27:21 policy-pap | 09:27:21 policy-pap | [2025-06-19T09:21:51.551+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector 09:27:21 policy-pap | [2025-06-19T09:21:51.701+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 09:27:21 policy-pap | [2025-06-19T09:21:51.701+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 09:27:21 policy-pap | [2025-06-19T09:21:51.701+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750324911699 09:27:21 policy-pap | [2025-06-19T09:21:51.704+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-c6a8bb97-2c08-4522-a637-4a0267c3b861-1, groupId=c6a8bb97-2c08-4522-a637-4a0267c3b861] Subscribed to topic(s): policy-pdp-pap 09:27:21 policy-pap | [2025-06-19T09:21:51.704+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 09:27:21 policy-pap | allow.auto.create.topics = true 09:27:21 policy-pap | auto.commit.interval.ms = 5000 09:27:21 policy-pap | auto.include.jmx.reporter = true 09:27:21 policy-pap | auto.offset.reset = latest 09:27:21 policy-pap | bootstrap.servers = [kafka:9092] 09:27:21 policy-pap | check.crcs = true 09:27:21 policy-pap | client.dns.lookup = use_all_dns_ips 09:27:21 policy-pap | client.id = consumer-policy-pap-2 09:27:21 policy-pap | client.rack = 09:27:21 policy-pap | connections.max.idle.ms = 540000 09:27:21 policy-pap | default.api.timeout.ms = 60000 09:27:21 policy-pap | enable.auto.commit = true 09:27:21 policy-pap | enable.metrics.push = true 09:27:21 policy-pap | exclude.internal.topics = true 09:27:21 policy-pap | fetch.max.bytes = 52428800 09:27:21 policy-pap | fetch.max.wait.ms = 500 09:27:21 policy-pap | fetch.min.bytes = 1 09:27:21 policy-pap | group.id = policy-pap 09:27:21 policy-pap | group.instance.id = null 09:27:21 policy-pap | group.protocol = classic 09:27:21 policy-pap | group.remote.assignor = null 09:27:21 policy-pap | heartbeat.interval.ms = 3000 09:27:21 policy-pap | interceptor.classes = [] 09:27:21 policy-pap | internal.leave.group.on.close = true 09:27:21 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 09:27:21 policy-pap | isolation.level = read_uncommitted 09:27:21 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 09:27:21 policy-pap | 
max.partition.fetch.bytes = 1048576 09:27:21 policy-pap | max.poll.interval.ms = 300000 09:27:21 policy-pap | max.poll.records = 500 09:27:21 policy-pap | metadata.max.age.ms = 300000 09:27:21 policy-pap | metadata.recovery.strategy = none 09:27:21 policy-pap | metric.reporters = [] 09:27:21 policy-pap | metrics.num.samples = 2 09:27:21 policy-pap | metrics.recording.level = INFO 09:27:21 policy-pap | metrics.sample.window.ms = 30000 09:27:21 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 09:27:21 policy-pap | receive.buffer.bytes = 65536 09:27:21 policy-pap | reconnect.backoff.max.ms = 1000 09:27:21 policy-pap | reconnect.backoff.ms = 50 09:27:21 policy-pap | request.timeout.ms = 30000 09:27:21 policy-pap | retry.backoff.max.ms = 1000 09:27:21 policy-pap | retry.backoff.ms = 100 09:27:21 policy-pap | sasl.client.callback.handler.class = null 09:27:21 policy-pap | sasl.jaas.config = null 09:27:21 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 09:27:21 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 09:27:21 policy-pap | sasl.kerberos.service.name = null 09:27:21 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 09:27:21 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 09:27:21 policy-pap | sasl.login.callback.handler.class = null 09:27:21 policy-pap | sasl.login.class = null 09:27:21 policy-pap | sasl.login.connect.timeout.ms = null 09:27:21 policy-pap | sasl.login.read.timeout.ms = null 09:27:21 policy-pap | sasl.login.refresh.buffer.seconds = 300 09:27:21 policy-pap | sasl.login.refresh.min.period.seconds = 60 09:27:21 policy-pap | sasl.login.refresh.window.factor = 0.8 09:27:21 policy-pap | sasl.login.refresh.window.jitter = 0.05 09:27:21 policy-pap | sasl.login.retry.backoff.max.ms = 10000 09:27:21 policy-pap | sasl.login.retry.backoff.ms = 100 09:27:21 policy-pap | sasl.mechanism = GSSAPI 09:27:21 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 09:27:21 policy-pap | sasl.oauthbearer.expected.audience = null 09:27:21 policy-pap | sasl.oauthbearer.expected.issuer = null 09:27:21 policy-pap | sasl.oauthbearer.header.urlencode = false 09:27:21 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 09:27:21 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 09:27:21 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 09:27:21 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 09:27:21 policy-pap | sasl.oauthbearer.scope.claim.name = scope 09:27:21 policy-pap | sasl.oauthbearer.sub.claim.name = sub 09:27:21 policy-pap | sasl.oauthbearer.token.endpoint.url = null 09:27:21 policy-pap | security.protocol = PLAINTEXT 09:27:21 policy-pap | security.providers = null 09:27:21 policy-pap | send.buffer.bytes = 131072 09:27:21 policy-pap | session.timeout.ms = 45000 09:27:21 policy-pap | socket.connection.setup.timeout.max.ms = 30000 09:27:21 policy-pap | socket.connection.setup.timeout.ms = 10000 09:27:21 policy-pap | ssl.cipher.suites = null 09:27:21 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 09:27:21 policy-pap | ssl.endpoint.identification.algorithm = https 09:27:21 policy-pap | ssl.engine.factory.class = null 09:27:21 policy-pap | ssl.key.password = null 09:27:21 policy-pap | ssl.keymanager.algorithm = SunX509 09:27:21 policy-pap | ssl.keystore.certificate.chain = null 09:27:21 policy-pap | ssl.keystore.key = null 09:27:21 policy-pap | ssl.keystore.location = null 09:27:21 
policy-pap | ssl.keystore.password = null 09:27:21 policy-pap | ssl.keystore.type = JKS 09:27:21 policy-pap | ssl.protocol = TLSv1.3 09:27:21 policy-pap | ssl.provider = null 09:27:21 policy-pap | ssl.secure.random.implementation = null 09:27:21 policy-pap | ssl.trustmanager.algorithm = PKIX 09:27:21 policy-pap | ssl.truststore.certificates = null 09:27:21 policy-pap | ssl.truststore.location = null 09:27:21 policy-pap | ssl.truststore.password = null 09:27:21 policy-pap | ssl.truststore.type = JKS 09:27:21 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 09:27:21 policy-pap | 09:27:21 policy-pap | [2025-06-19T09:21:51.705+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector 09:27:21 policy-pap | [2025-06-19T09:21:51.713+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 09:27:21 policy-pap | [2025-06-19T09:21:51.713+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 09:27:21 policy-pap | [2025-06-19T09:21:51.713+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750324911713 09:27:21 policy-pap | [2025-06-19T09:21:51.713+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-policy-pap-2, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap 09:27:21 policy-pap | [2025-06-19T09:21:52.082+00:00|INFO|PapDatabaseInitializer|main] Created initial pdpGroup in DB - PdpGroups(groups=[PdpGroup(name=opaGroup, description=null, pdpGroupState=ACTIVE, properties={}, pdpSubgroups=[PdpSubGroup(pdpType=opa, supportedPolicyTypes=[onap.policies.native.opa 1.0.0], policies=[slice.capacity.check 1.0.0], currentInstanceCount=0, desiredInstanceCount=1, properties={}, pdpInstances=null)])]) from /opt/app/policy/pap/etc/mounted/groups.json 09:27:21 policy-pap | [2025-06-19T09:21:52.230+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning 09:27:21 policy-pap | [2025-06-19T09:21:52.310+00:00|INFO|InitializeUserDetailsBeanManagerConfigurer$InitializeUserDetailsManagerConfigurer|main] Global AuthenticationManager configured with UserDetailsService bean with name inMemoryUserDetailsManager 09:27:21 policy-pap | [2025-06-19T09:21:52.520+00:00|INFO|OptionalValidatorFactoryBean|main] Failed to set up a Bean Validation provider: jakarta.validation.NoProviderFoundException: Unable to create a Configuration, because no Jakarta Validation provider could be found. Add a provider like Hibernate Validator (RI) to your classpath. 
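For context on the ConsumerConfig dumps above: they come from the stock Apache Kafka Java client (kafka-clients 3.9.1) that PAP drives through its KafkaConsumerWrapper. The sketch below is only an illustration of how the non-default values in those dumps (bootstrap.servers=kafka:9092, group.id=policy-pap, auto.offset.reset=latest, String key/value deserializers, topic policy-pdp-pap) would map onto a standalone consumer; the class name PdpPapTailConsumer is made up for the example and is not part of the PAP code base.

import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

// Illustrative stand-alone consumer mirroring the non-default values in the ConsumerConfig dump above.
public class PdpPapTailConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "policy-pap");
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("policy-pdp-pap"));  // same topic the log shows PAP subscribing to
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
                }
            }
        }
    }
}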
09:27:21 policy-pap | [2025-06-19T09:21:53.273+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoints beneath base path '' 09:27:21 policy-pap | [2025-06-19T09:21:53.398+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] 09:27:21 policy-pap | [2025-06-19T09:21:53.426+00:00|INFO|TomcatWebServer|main] Tomcat started on port 6969 (http) with context path '/policy/pap/v1' 09:27:21 policy-pap | [2025-06-19T09:21:53.449+00:00|INFO|ServiceManager|main] Policy PAP starting 09:27:21 policy-pap | [2025-06-19T09:21:53.449+00:00|INFO|ServiceManager|main] Policy PAP starting Meter Registry 09:27:21 policy-pap | [2025-06-19T09:21:53.449+00:00|INFO|ServiceManager|main] Policy PAP starting PAP parameters 09:27:21 policy-pap | [2025-06-19T09:21:53.450+00:00|INFO|ServiceManager|main] Policy PAP starting Pdp Heartbeat Listener 09:27:21 policy-pap | [2025-06-19T09:21:53.450+00:00|INFO|ServiceManager|main] Policy PAP starting Response Request ID Dispatcher 09:27:21 policy-pap | [2025-06-19T09:21:53.450+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Request ID Dispatcher 09:27:21 policy-pap | [2025-06-19T09:21:53.450+00:00|INFO|ServiceManager|main] Policy PAP starting Response Message Dispatcher 09:27:21 policy-pap | [2025-06-19T09:21:53.452+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=c6a8bb97-2c08-4522-a637-4a0267c3b861, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@753bfb4b 09:27:21 policy-pap | [2025-06-19T09:21:53.462+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=c6a8bb97-2c08-4522-a637-4a0267c3b861, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 09:27:21 policy-pap | [2025-06-19T09:21:53.463+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 09:27:21 policy-pap | allow.auto.create.topics = true 09:27:21 policy-pap | auto.commit.interval.ms = 5000 09:27:21 policy-pap | auto.include.jmx.reporter = true 09:27:21 policy-pap | auto.offset.reset = latest 09:27:21 policy-pap | bootstrap.servers = [kafka:9092] 09:27:21 policy-pap | check.crcs = true 09:27:21 policy-pap | client.dns.lookup = use_all_dns_ips 09:27:21 policy-pap | client.id = consumer-c6a8bb97-2c08-4522-a637-4a0267c3b861-3 09:27:21 policy-pap | client.rack = 09:27:21 policy-pap | connections.max.idle.ms = 540000 09:27:21 policy-pap | default.api.timeout.ms = 60000 09:27:21 policy-pap | enable.auto.commit = true 09:27:21 policy-pap | enable.metrics.push = true 09:27:21 policy-pap | exclude.internal.topics = true 09:27:21 policy-pap | 
fetch.max.bytes = 52428800 09:27:21 policy-pap | fetch.max.wait.ms = 500 09:27:21 policy-pap | fetch.min.bytes = 1 09:27:21 policy-pap | group.id = c6a8bb97-2c08-4522-a637-4a0267c3b861 09:27:21 policy-pap | group.instance.id = null 09:27:21 policy-pap | group.protocol = classic 09:27:21 policy-pap | group.remote.assignor = null 09:27:21 policy-pap | heartbeat.interval.ms = 3000 09:27:21 policy-pap | interceptor.classes = [] 09:27:21 policy-pap | internal.leave.group.on.close = true 09:27:21 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 09:27:21 policy-pap | isolation.level = read_uncommitted 09:27:21 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 09:27:21 policy-pap | max.partition.fetch.bytes = 1048576 09:27:21 policy-pap | max.poll.interval.ms = 300000 09:27:21 policy-pap | max.poll.records = 500 09:27:21 policy-pap | metadata.max.age.ms = 300000 09:27:21 policy-pap | metadata.recovery.strategy = none 09:27:21 policy-pap | metric.reporters = [] 09:27:21 policy-pap | metrics.num.samples = 2 09:27:21 policy-pap | metrics.recording.level = INFO 09:27:21 policy-pap | metrics.sample.window.ms = 30000 09:27:21 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 09:27:21 policy-pap | receive.buffer.bytes = 65536 09:27:21 policy-pap | reconnect.backoff.max.ms = 1000 09:27:21 policy-pap | reconnect.backoff.ms = 50 09:27:21 policy-pap | request.timeout.ms = 30000 09:27:21 policy-pap | retry.backoff.max.ms = 1000 09:27:21 policy-pap | retry.backoff.ms = 100 09:27:21 policy-pap | sasl.client.callback.handler.class = null 09:27:21 policy-pap | sasl.jaas.config = null 09:27:21 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 09:27:21 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 09:27:21 policy-pap | sasl.kerberos.service.name = null 09:27:21 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 09:27:21 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 09:27:21 policy-pap | sasl.login.callback.handler.class = null 09:27:21 policy-pap | sasl.login.class = null 09:27:21 policy-pap | sasl.login.connect.timeout.ms = null 09:27:21 policy-pap | sasl.login.read.timeout.ms = null 09:27:21 policy-pap | sasl.login.refresh.buffer.seconds = 300 09:27:21 policy-pap | sasl.login.refresh.min.period.seconds = 60 09:27:21 policy-pap | sasl.login.refresh.window.factor = 0.8 09:27:21 policy-pap | sasl.login.refresh.window.jitter = 0.05 09:27:21 policy-pap | sasl.login.retry.backoff.max.ms = 10000 09:27:21 policy-pap | sasl.login.retry.backoff.ms = 100 09:27:21 policy-pap | sasl.mechanism = GSSAPI 09:27:21 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 09:27:21 policy-pap | sasl.oauthbearer.expected.audience = null 09:27:21 policy-pap | sasl.oauthbearer.expected.issuer = null 09:27:21 policy-pap | sasl.oauthbearer.header.urlencode = false 09:27:21 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 09:27:21 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 09:27:21 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 09:27:21 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 09:27:21 policy-pap | sasl.oauthbearer.scope.claim.name = scope 09:27:21 policy-pap | sasl.oauthbearer.sub.claim.name = sub 09:27:21 policy-pap | sasl.oauthbearer.token.endpoint.url = null 09:27:21 policy-pap | security.protocol = PLAINTEXT 09:27:21 policy-pap | 
security.providers = null 09:27:21 policy-pap | send.buffer.bytes = 131072 09:27:21 policy-pap | session.timeout.ms = 45000 09:27:21 policy-pap | socket.connection.setup.timeout.max.ms = 30000 09:27:21 policy-pap | socket.connection.setup.timeout.ms = 10000 09:27:21 policy-pap | ssl.cipher.suites = null 09:27:21 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 09:27:21 policy-pap | ssl.endpoint.identification.algorithm = https 09:27:21 policy-pap | ssl.engine.factory.class = null 09:27:21 policy-pap | ssl.key.password = null 09:27:21 policy-pap | ssl.keymanager.algorithm = SunX509 09:27:21 policy-pap | ssl.keystore.certificate.chain = null 09:27:21 policy-pap | ssl.keystore.key = null 09:27:21 policy-pap | ssl.keystore.location = null 09:27:21 policy-pap | ssl.keystore.password = null 09:27:21 policy-pap | ssl.keystore.type = JKS 09:27:21 policy-pap | ssl.protocol = TLSv1.3 09:27:21 policy-pap | ssl.provider = null 09:27:21 policy-pap | ssl.secure.random.implementation = null 09:27:21 policy-pap | ssl.trustmanager.algorithm = PKIX 09:27:21 policy-pap | ssl.truststore.certificates = null 09:27:21 policy-pap | ssl.truststore.location = null 09:27:21 policy-pap | ssl.truststore.password = null 09:27:21 policy-pap | ssl.truststore.type = JKS 09:27:21 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 09:27:21 policy-pap | 09:27:21 policy-pap | [2025-06-19T09:21:53.463+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector 09:27:21 policy-pap | [2025-06-19T09:21:53.470+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 09:27:21 policy-pap | [2025-06-19T09:21:53.470+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 09:27:21 policy-pap | [2025-06-19T09:21:53.470+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750324913470 09:27:21 policy-pap | [2025-06-19T09:21:53.470+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-c6a8bb97-2c08-4522-a637-4a0267c3b861-3, groupId=c6a8bb97-2c08-4522-a637-4a0267c3b861] Subscribed to topic(s): policy-pdp-pap 09:27:21 policy-pap | [2025-06-19T09:21:53.470+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Message Dispatcher 09:27:21 policy-pap | [2025-06-19T09:21:53.470+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=ae443848-4985-42ef-8b9b-1cd450680602, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@438cb294 09:27:21 policy-pap | [2025-06-19T09:21:53.471+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=ae443848-4985-42ef-8b9b-1cd450680602, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, 
effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 09:27:21 policy-pap | [2025-06-19T09:21:53.471+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 09:27:21 policy-pap | allow.auto.create.topics = true 09:27:21 policy-pap | auto.commit.interval.ms = 5000 09:27:21 policy-pap | auto.include.jmx.reporter = true 09:27:21 policy-pap | auto.offset.reset = latest 09:27:21 policy-pap | bootstrap.servers = [kafka:9092] 09:27:21 policy-pap | check.crcs = true 09:27:21 policy-pap | client.dns.lookup = use_all_dns_ips 09:27:21 policy-pap | client.id = consumer-policy-pap-4 09:27:21 policy-pap | client.rack = 09:27:21 policy-pap | connections.max.idle.ms = 540000 09:27:21 policy-pap | default.api.timeout.ms = 60000 09:27:21 policy-pap | enable.auto.commit = true 09:27:21 policy-pap | enable.metrics.push = true 09:27:21 policy-pap | exclude.internal.topics = true 09:27:21 policy-pap | fetch.max.bytes = 52428800 09:27:21 policy-pap | fetch.max.wait.ms = 500 09:27:21 policy-pap | fetch.min.bytes = 1 09:27:21 policy-pap | group.id = policy-pap 09:27:21 policy-pap | group.instance.id = null 09:27:21 policy-pap | group.protocol = classic 09:27:21 policy-pap | group.remote.assignor = null 09:27:21 policy-pap | heartbeat.interval.ms = 3000 09:27:21 policy-pap | interceptor.classes = [] 09:27:21 policy-pap | internal.leave.group.on.close = true 09:27:21 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 09:27:21 policy-pap | isolation.level = read_uncommitted 09:27:21 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 09:27:21 policy-pap | max.partition.fetch.bytes = 1048576 09:27:21 policy-pap | max.poll.interval.ms = 300000 09:27:21 policy-pap | max.poll.records = 500 09:27:21 policy-pap | metadata.max.age.ms = 300000 09:27:21 policy-pap | metadata.recovery.strategy = none 09:27:21 policy-pap | metric.reporters = [] 09:27:21 policy-pap | metrics.num.samples = 2 09:27:21 policy-pap | metrics.recording.level = INFO 09:27:21 policy-pap | metrics.sample.window.ms = 30000 09:27:21 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 09:27:21 policy-pap | receive.buffer.bytes = 65536 09:27:21 policy-pap | reconnect.backoff.max.ms = 1000 09:27:21 policy-pap | reconnect.backoff.ms = 50 09:27:21 policy-pap | request.timeout.ms = 30000 09:27:21 policy-pap | retry.backoff.max.ms = 1000 09:27:21 policy-pap | retry.backoff.ms = 100 09:27:21 policy-pap | sasl.client.callback.handler.class = null 09:27:21 policy-pap | sasl.jaas.config = null 09:27:21 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 09:27:21 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 09:27:21 policy-pap | sasl.kerberos.service.name = null 09:27:21 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 09:27:21 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 09:27:21 policy-pap | sasl.login.callback.handler.class = null 09:27:21 policy-pap | sasl.login.class = null 09:27:21 policy-pap | sasl.login.connect.timeout.ms = null 09:27:21 policy-pap | sasl.login.read.timeout.ms = null 09:27:21 policy-pap | sasl.login.refresh.buffer.seconds = 300 09:27:21 policy-pap | sasl.login.refresh.min.period.seconds = 60 09:27:21 policy-pap | sasl.login.refresh.window.factor = 0.8 09:27:21 policy-pap | sasl.login.refresh.window.jitter = 0.05 09:27:21 policy-pap | sasl.login.retry.backoff.max.ms = 10000 09:27:21 policy-pap | 
sasl.login.retry.backoff.ms = 100 09:27:21 policy-pap | sasl.mechanism = GSSAPI 09:27:21 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 09:27:21 policy-pap | sasl.oauthbearer.expected.audience = null 09:27:21 policy-pap | sasl.oauthbearer.expected.issuer = null 09:27:21 policy-pap | sasl.oauthbearer.header.urlencode = false 09:27:21 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 09:27:21 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 09:27:21 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 09:27:21 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 09:27:21 policy-pap | sasl.oauthbearer.scope.claim.name = scope 09:27:21 policy-pap | sasl.oauthbearer.sub.claim.name = sub 09:27:21 policy-pap | sasl.oauthbearer.token.endpoint.url = null 09:27:21 policy-pap | security.protocol = PLAINTEXT 09:27:21 policy-pap | security.providers = null 09:27:21 policy-pap | send.buffer.bytes = 131072 09:27:21 policy-pap | session.timeout.ms = 45000 09:27:21 policy-pap | socket.connection.setup.timeout.max.ms = 30000 09:27:21 policy-pap | socket.connection.setup.timeout.ms = 10000 09:27:21 policy-pap | ssl.cipher.suites = null 09:27:21 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 09:27:21 policy-pap | ssl.endpoint.identification.algorithm = https 09:27:21 policy-pap | ssl.engine.factory.class = null 09:27:21 policy-pap | ssl.key.password = null 09:27:21 policy-pap | ssl.keymanager.algorithm = SunX509 09:27:21 policy-pap | ssl.keystore.certificate.chain = null 09:27:21 policy-pap | ssl.keystore.key = null 09:27:21 policy-pap | ssl.keystore.location = null 09:27:21 policy-pap | ssl.keystore.password = null 09:27:21 policy-pap | ssl.keystore.type = JKS 09:27:21 policy-pap | ssl.protocol = TLSv1.3 09:27:21 policy-pap | ssl.provider = null 09:27:21 policy-pap | ssl.secure.random.implementation = null 09:27:21 policy-pap | ssl.trustmanager.algorithm = PKIX 09:27:21 policy-pap | ssl.truststore.certificates = null 09:27:21 policy-pap | ssl.truststore.location = null 09:27:21 policy-pap | ssl.truststore.password = null 09:27:21 policy-pap | ssl.truststore.type = JKS 09:27:21 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 09:27:21 policy-pap | 09:27:21 policy-pap | [2025-06-19T09:21:53.471+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector 09:27:21 policy-pap | [2025-06-19T09:21:53.476+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 09:27:21 policy-pap | [2025-06-19T09:21:53.476+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 09:27:21 policy-pap | [2025-06-19T09:21:53.476+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750324913476 09:27:21 policy-pap | [2025-06-19T09:21:53.477+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap 09:27:21 policy-pap | [2025-06-19T09:21:53.477+00:00|INFO|ServiceManager|main] Policy PAP starting topics 09:27:21 policy-pap | [2025-06-19T09:21:53.477+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=ae443848-4985-42ef-8b9b-1cd450680602, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-heartbeat,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, 
useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 09:27:21 policy-pap | [2025-06-19T09:21:53.477+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=c6a8bb97-2c08-4522-a637-4a0267c3b861, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 09:27:21 policy-pap | [2025-06-19T09:21:53.477+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=441fdadd-256c-4985-84a3-e8d358f4d25d, alive=false, publisher=null]]: starting 09:27:21 policy-pap | [2025-06-19T09:21:53.489+00:00|INFO|ProducerConfig|main] ProducerConfig values: 09:27:21 policy-pap | acks = -1 09:27:21 policy-pap | auto.include.jmx.reporter = true 09:27:21 policy-pap | batch.size = 16384 09:27:21 policy-pap | bootstrap.servers = [kafka:9092] 09:27:21 policy-pap | buffer.memory = 33554432 09:27:21 policy-pap | client.dns.lookup = use_all_dns_ips 09:27:21 policy-pap | client.id = producer-1 09:27:21 policy-pap | compression.gzip.level = -1 09:27:21 policy-pap | compression.lz4.level = 9 09:27:21 policy-pap | compression.type = none 09:27:21 policy-pap | compression.zstd.level = 3 09:27:21 policy-pap | connections.max.idle.ms = 540000 09:27:21 policy-pap | delivery.timeout.ms = 120000 09:27:21 policy-pap | enable.idempotence = true 09:27:21 policy-pap | enable.metrics.push = true 09:27:21 policy-pap | interceptor.classes = [] 09:27:21 policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 09:27:21 policy-pap | linger.ms = 0 09:27:21 policy-pap | max.block.ms = 60000 09:27:21 policy-pap | max.in.flight.requests.per.connection = 5 09:27:21 policy-pap | max.request.size = 1048576 09:27:21 policy-pap | metadata.max.age.ms = 300000 09:27:21 policy-pap | metadata.max.idle.ms = 300000 09:27:21 policy-pap | metadata.recovery.strategy = none 09:27:21 policy-pap | metric.reporters = [] 09:27:21 policy-pap | metrics.num.samples = 2 09:27:21 policy-pap | metrics.recording.level = INFO 09:27:21 policy-pap | metrics.sample.window.ms = 30000 09:27:21 policy-pap | partitioner.adaptive.partitioning.enable = true 09:27:21 policy-pap | partitioner.availability.timeout.ms = 0 09:27:21 policy-pap | partitioner.class = null 09:27:21 policy-pap | partitioner.ignore.keys = false 09:27:21 policy-pap | receive.buffer.bytes = 32768 09:27:21 policy-pap | reconnect.backoff.max.ms = 1000 09:27:21 policy-pap | reconnect.backoff.ms = 50 09:27:21 policy-pap | request.timeout.ms = 30000 09:27:21 policy-pap | retries = 2147483647 09:27:21 policy-pap | retry.backoff.max.ms = 1000 09:27:21 policy-pap | retry.backoff.ms = 100 09:27:21 policy-pap | sasl.client.callback.handler.class = null 09:27:21 policy-pap | sasl.jaas.config = null 09:27:21 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 09:27:21 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 09:27:21 policy-pap 
| sasl.kerberos.service.name = null 09:27:21 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 09:27:21 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 09:27:21 policy-pap | sasl.login.callback.handler.class = null 09:27:21 policy-pap | sasl.login.class = null 09:27:21 policy-pap | sasl.login.connect.timeout.ms = null 09:27:21 policy-pap | sasl.login.read.timeout.ms = null 09:27:21 policy-pap | sasl.login.refresh.buffer.seconds = 300 09:27:21 policy-pap | sasl.login.refresh.min.period.seconds = 60 09:27:21 policy-pap | sasl.login.refresh.window.factor = 0.8 09:27:21 policy-pap | sasl.login.refresh.window.jitter = 0.05 09:27:21 policy-pap | sasl.login.retry.backoff.max.ms = 10000 09:27:21 policy-pap | sasl.login.retry.backoff.ms = 100 09:27:21 policy-pap | sasl.mechanism = GSSAPI 09:27:21 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 09:27:21 policy-pap | sasl.oauthbearer.expected.audience = null 09:27:21 policy-pap | sasl.oauthbearer.expected.issuer = null 09:27:21 policy-pap | sasl.oauthbearer.header.urlencode = false 09:27:21 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 09:27:21 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 09:27:21 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 09:27:21 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 09:27:21 policy-pap | sasl.oauthbearer.scope.claim.name = scope 09:27:21 policy-pap | sasl.oauthbearer.sub.claim.name = sub 09:27:21 policy-pap | sasl.oauthbearer.token.endpoint.url = null 09:27:21 policy-pap | security.protocol = PLAINTEXT 09:27:21 policy-pap | security.providers = null 09:27:21 policy-pap | send.buffer.bytes = 131072 09:27:21 policy-pap | socket.connection.setup.timeout.max.ms = 30000 09:27:21 policy-pap | socket.connection.setup.timeout.ms = 10000 09:27:21 policy-pap | ssl.cipher.suites = null 09:27:21 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 09:27:21 policy-pap | ssl.endpoint.identification.algorithm = https 09:27:21 policy-pap | ssl.engine.factory.class = null 09:27:21 policy-pap | ssl.key.password = null 09:27:21 policy-pap | ssl.keymanager.algorithm = SunX509 09:27:21 policy-pap | ssl.keystore.certificate.chain = null 09:27:21 policy-pap | ssl.keystore.key = null 09:27:21 policy-pap | ssl.keystore.location = null 09:27:21 policy-pap | ssl.keystore.password = null 09:27:21 policy-pap | ssl.keystore.type = JKS 09:27:21 policy-pap | ssl.protocol = TLSv1.3 09:27:21 policy-pap | ssl.provider = null 09:27:21 policy-pap | ssl.secure.random.implementation = null 09:27:21 policy-pap | ssl.trustmanager.algorithm = PKIX 09:27:21 policy-pap | ssl.truststore.certificates = null 09:27:21 policy-pap | ssl.truststore.location = null 09:27:21 policy-pap | ssl.truststore.password = null 09:27:21 policy-pap | ssl.truststore.type = JKS 09:27:21 policy-pap | transaction.timeout.ms = 60000 09:27:21 policy-pap | transactional.id = null 09:27:21 policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 09:27:21 policy-pap | 09:27:21 policy-pap | [2025-06-19T09:21:53.490+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector 09:27:21 policy-pap | [2025-06-19T09:21:53.503+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. 
09:27:21 policy-pap | [2025-06-19T09:21:53.521+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 09:27:21 policy-pap | [2025-06-19T09:21:53.521+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 09:27:21 policy-pap | [2025-06-19T09:21:53.521+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750324913521 09:27:21 policy-pap | [2025-06-19T09:21:53.521+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=441fdadd-256c-4985-84a3-e8d358f4d25d, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 09:27:21 policy-pap | [2025-06-19T09:21:53.521+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=5c7afa40-487d-4f10-b5e8-261ab7cc130d, alive=false, publisher=null]]: starting 09:27:21 policy-pap | [2025-06-19T09:21:53.522+00:00|INFO|ProducerConfig|main] ProducerConfig values: 09:27:21 policy-pap | acks = -1 09:27:21 policy-pap | auto.include.jmx.reporter = true 09:27:21 policy-pap | batch.size = 16384 09:27:21 policy-pap | bootstrap.servers = [kafka:9092] 09:27:21 policy-pap | buffer.memory = 33554432 09:27:21 policy-pap | client.dns.lookup = use_all_dns_ips 09:27:21 policy-pap | client.id = producer-2 09:27:21 policy-pap | compression.gzip.level = -1 09:27:21 policy-pap | compression.lz4.level = 9 09:27:21 policy-pap | compression.type = none 09:27:21 policy-pap | compression.zstd.level = 3 09:27:21 policy-pap | connections.max.idle.ms = 540000 09:27:21 policy-pap | delivery.timeout.ms = 120000 09:27:21 policy-pap | enable.idempotence = true 09:27:21 policy-pap | enable.metrics.push = true 09:27:21 policy-pap | interceptor.classes = [] 09:27:21 policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 09:27:21 policy-pap | linger.ms = 0 09:27:21 policy-pap | max.block.ms = 60000 09:27:21 policy-pap | max.in.flight.requests.per.connection = 5 09:27:21 policy-pap | max.request.size = 1048576 09:27:21 policy-pap | metadata.max.age.ms = 300000 09:27:21 policy-pap | metadata.max.idle.ms = 300000 09:27:21 policy-pap | metadata.recovery.strategy = none 09:27:21 policy-pap | metric.reporters = [] 09:27:21 policy-pap | metrics.num.samples = 2 09:27:21 policy-pap | metrics.recording.level = INFO 09:27:21 policy-pap | metrics.sample.window.ms = 30000 09:27:21 policy-pap | partitioner.adaptive.partitioning.enable = true 09:27:21 policy-pap | partitioner.availability.timeout.ms = 0 09:27:21 policy-pap | partitioner.class = null 09:27:21 policy-pap | partitioner.ignore.keys = false 09:27:21 policy-pap | receive.buffer.bytes = 32768 09:27:21 policy-pap | reconnect.backoff.max.ms = 1000 09:27:21 policy-pap | reconnect.backoff.ms = 50 09:27:21 policy-pap | request.timeout.ms = 30000 09:27:21 policy-pap | retries = 2147483647 09:27:21 policy-pap | retry.backoff.max.ms = 1000 09:27:21 policy-pap | retry.backoff.ms = 100 09:27:21 policy-pap | sasl.client.callback.handler.class = null 09:27:21 policy-pap | sasl.jaas.config = null 09:27:21 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 09:27:21 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 09:27:21 policy-pap | sasl.kerberos.service.name = null 09:27:21 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 09:27:21 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 09:27:21 policy-pap | sasl.login.callback.handler.class = null 09:27:21 policy-pap | sasl.login.class = null 09:27:21 policy-pap | 
sasl.login.connect.timeout.ms = null 09:27:21 policy-pap | sasl.login.read.timeout.ms = null 09:27:21 policy-pap | sasl.login.refresh.buffer.seconds = 300 09:27:21 policy-pap | sasl.login.refresh.min.period.seconds = 60 09:27:21 policy-pap | sasl.login.refresh.window.factor = 0.8 09:27:21 policy-pap | sasl.login.refresh.window.jitter = 0.05 09:27:21 policy-pap | sasl.login.retry.backoff.max.ms = 10000 09:27:21 policy-pap | sasl.login.retry.backoff.ms = 100 09:27:21 policy-pap | sasl.mechanism = GSSAPI 09:27:21 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 09:27:21 policy-pap | sasl.oauthbearer.expected.audience = null 09:27:21 policy-pap | sasl.oauthbearer.expected.issuer = null 09:27:21 policy-pap | sasl.oauthbearer.header.urlencode = false 09:27:21 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 09:27:21 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 09:27:21 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 09:27:21 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 09:27:21 policy-pap | sasl.oauthbearer.scope.claim.name = scope 09:27:21 policy-pap | sasl.oauthbearer.sub.claim.name = sub 09:27:21 policy-pap | sasl.oauthbearer.token.endpoint.url = null 09:27:21 policy-pap | security.protocol = PLAINTEXT 09:27:21 policy-pap | security.providers = null 09:27:21 policy-pap | send.buffer.bytes = 131072 09:27:21 policy-pap | socket.connection.setup.timeout.max.ms = 30000 09:27:21 policy-pap | socket.connection.setup.timeout.ms = 10000 09:27:21 policy-pap | ssl.cipher.suites = null 09:27:21 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 09:27:21 policy-pap | ssl.endpoint.identification.algorithm = https 09:27:21 policy-pap | ssl.engine.factory.class = null 09:27:21 policy-pap | ssl.key.password = null 09:27:21 policy-pap | ssl.keymanager.algorithm = SunX509 09:27:21 policy-pap | ssl.keystore.certificate.chain = null 09:27:21 policy-pap | ssl.keystore.key = null 09:27:21 policy-pap | ssl.keystore.location = null 09:27:21 policy-pap | ssl.keystore.password = null 09:27:21 policy-pap | ssl.keystore.type = JKS 09:27:21 policy-pap | ssl.protocol = TLSv1.3 09:27:21 policy-pap | ssl.provider = null 09:27:21 policy-pap | ssl.secure.random.implementation = null 09:27:21 policy-pap | ssl.trustmanager.algorithm = PKIX 09:27:21 policy-pap | ssl.truststore.certificates = null 09:27:21 policy-pap | ssl.truststore.location = null 09:27:21 policy-pap | ssl.truststore.password = null 09:27:21 policy-pap | ssl.truststore.type = JKS 09:27:21 policy-pap | transaction.timeout.ms = 60000 09:27:21 policy-pap | transactional.id = null 09:27:21 policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 09:27:21 policy-pap | 09:27:21 policy-pap | [2025-06-19T09:21:53.522+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector 09:27:21 policy-pap | [2025-06-19T09:21:53.522+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-2] Instantiated an idempotent producer. 
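The two ProducerConfig dumps above (producer-1 and producer-2) describe idempotent String/String producers: acks = -1, enable.idempotence = true, retries = 2147483647, StringSerializer for both key and value. The sketch below is only an assumption-labelled illustration of the same non-default settings; the class name and the placeholder payload are not taken from PAP.

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

// Illustrative producer matching the idempotent String/String setup logged above.
public class PdpPapPublisherSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        props.put(ProducerConfig.ACKS_CONFIG, "all");               // logged as acks = -1
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Placeholder payload; PAP itself serialises PdpUpdate/PdpStateChange objects to JSON.
            producer.send(new ProducerRecord<>("policy-pdp-pap", "{\"messageName\":\"PDP_UPDATE\"}"));
            producer.flush();
        }
    }
}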
09:27:21 policy-pap | [2025-06-19T09:21:53.526+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 09:27:21 policy-pap | [2025-06-19T09:21:53.526+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 09:27:21 policy-pap | [2025-06-19T09:21:53.526+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750324913526 09:27:21 policy-pap | [2025-06-19T09:21:53.526+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=5c7afa40-487d-4f10-b5e8-261ab7cc130d, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 09:27:21 policy-pap | [2025-06-19T09:21:53.526+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator 09:27:21 policy-pap | [2025-06-19T09:21:53.526+00:00|INFO|ServiceManager|main] Policy PAP starting PDP publisher 09:27:21 policy-pap | [2025-06-19T09:21:53.528+00:00|INFO|ServiceManager|main] Policy PAP starting Policy Notification publisher 09:27:21 policy-pap | [2025-06-19T09:21:53.529+00:00|INFO|ServiceManager|main] Policy PAP starting PDP update timers 09:27:21 policy-pap | [2025-06-19T09:21:53.530+00:00|INFO|ServiceManager|main] Policy PAP starting PDP state-change timers 09:27:21 policy-pap | [2025-06-19T09:21:53.531+00:00|INFO|TimerManager|Thread-9] timer manager update started 09:27:21 policy-pap | [2025-06-19T09:21:53.534+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification lock 09:27:21 policy-pap | [2025-06-19T09:21:53.534+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification requests 09:27:21 policy-pap | [2025-06-19T09:21:53.535+00:00|INFO|TimerManager|Thread-10] timer manager state-change started 09:27:21 policy-pap | [2025-06-19T09:21:53.535+00:00|INFO|ServiceManager|main] Policy PAP starting PDP expiration timer 09:27:21 policy-pap | [2025-06-19T09:21:53.535+00:00|INFO|ServiceManager|main] Policy PAP started 09:27:21 policy-pap | [2025-06-19T09:21:53.536+00:00|INFO|PolicyPapApplication|main] Started PolicyPapApplication in 10.647 seconds (process running for 11.242) 09:27:21 policy-pap | [2025-06-19T09:21:53.957+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c6a8bb97-2c08-4522-a637-4a0267c3b861-3, groupId=c6a8bb97-2c08-4522-a637-4a0267c3b861] The metadata response from the cluster reported a recoverable issue with correlation id 3 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 09:27:21 policy-pap | [2025-06-19T09:21:53.957+00:00|INFO|Metadata|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Cluster ID: qtmK5DWmQ46tn_mNqWtZzg 09:27:21 policy-pap | [2025-06-19T09:21:53.958+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c6a8bb97-2c08-4522-a637-4a0267c3b861-3, groupId=c6a8bb97-2c08-4522-a637-4a0267c3b861] Cluster ID: qtmK5DWmQ46tn_mNqWtZzg 09:27:21 policy-pap | [2025-06-19T09:21:53.958+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: qtmK5DWmQ46tn_mNqWtZzg 09:27:21 policy-pap | [2025-06-19T09:21:54.081+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c6a8bb97-2c08-4522-a637-4a0267c3b861-3, groupId=c6a8bb97-2c08-4522-a637-4a0267c3b861] The metadata response from the cluster reported a recoverable issue with correlation id 7 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 09:27:21 policy-pap | [2025-06-19T09:21:54.256+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c6a8bb97-2c08-4522-a637-4a0267c3b861-3, 
groupId=c6a8bb97-2c08-4522-a637-4a0267c3b861] The metadata response from the cluster reported a recoverable issue with correlation id 8 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 09:27:21 policy-pap | [2025-06-19T09:21:54.625+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c6a8bb97-2c08-4522-a637-4a0267c3b861-3, groupId=c6a8bb97-2c08-4522-a637-4a0267c3b861] The metadata response from the cluster reported a recoverable issue with correlation id 9 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 09:27:21 policy-pap | [2025-06-19T09:21:54.811+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] ProducerId set to 1 with epoch 0 09:27:21 policy-pap | [2025-06-19T09:21:54.814+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 0 with epoch 0 09:27:21 policy-pap | [2025-06-19T09:21:54.829+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 3 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 09:27:21 policy-pap | [2025-06-19T09:21:54.829+00:00|INFO|Metadata|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Cluster ID: qtmK5DWmQ46tn_mNqWtZzg 09:27:21 policy-pap | [2025-06-19T09:21:54.959+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 7 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 09:27:21 policy-pap | [2025-06-19T09:21:55.144+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 9 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 09:27:21 policy-pap | [2025-06-19T09:21:55.413+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c6a8bb97-2c08-4522-a637-4a0267c3b861-3, groupId=c6a8bb97-2c08-4522-a637-4a0267c3b861] The metadata response from the cluster reported a recoverable issue with correlation id 10 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 09:27:21 policy-pap | [2025-06-19T09:21:55.644+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 09:27:21 policy-pap | [2025-06-19T09:21:55.649+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group 09:27:21 policy-pap | [2025-06-19T09:21:55.684+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: need to re-join with the given member-id: consumer-policy-pap-4-c9c1617b-8778-482c-9701-379617313425 09:27:21 policy-pap | [2025-06-19T09:21:55.684+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group 09:27:21 policy-pap | [2025-06-19T09:21:56.419+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c6a8bb97-2c08-4522-a637-4a0267c3b861-3, groupId=c6a8bb97-2c08-4522-a637-4a0267c3b861] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 09:27:21 
policy-pap | [2025-06-19T09:21:56.422+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c6a8bb97-2c08-4522-a637-4a0267c3b861-3, groupId=c6a8bb97-2c08-4522-a637-4a0267c3b861] (Re-)joining group 09:27:21 policy-pap | [2025-06-19T09:21:56.426+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c6a8bb97-2c08-4522-a637-4a0267c3b861-3, groupId=c6a8bb97-2c08-4522-a637-4a0267c3b861] Request joining group due to: need to re-join with the given member-id: consumer-c6a8bb97-2c08-4522-a637-4a0267c3b861-3-51425536-985d-4a92-9ee0-85d89d098cce 09:27:21 policy-pap | [2025-06-19T09:21:56.427+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c6a8bb97-2c08-4522-a637-4a0267c3b861-3, groupId=c6a8bb97-2c08-4522-a637-4a0267c3b861] (Re-)joining group 09:27:21 policy-pap | [2025-06-19T09:21:58.707+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully joined group with generation Generation{generationId=1, memberId='consumer-policy-pap-4-c9c1617b-8778-482c-9701-379617313425', protocol='range'} 09:27:21 policy-pap | [2025-06-19T09:21:58.719+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Finished assignment for group at generation 1: {consumer-policy-pap-4-c9c1617b-8778-482c-9701-379617313425=Assignment(partitions=[policy-pdp-pap-0])} 09:27:21 policy-pap | [2025-06-19T09:21:58.772+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully synced group in generation Generation{generationId=1, memberId='consumer-policy-pap-4-c9c1617b-8778-482c-9701-379617313425', protocol='range'} 09:27:21 policy-pap | [2025-06-19T09:21:58.773+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 09:27:21 policy-pap | [2025-06-19T09:21:58.775+00:00|INFO|ConsumerRebalanceListenerInvoker|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Adding newly assigned partitions: policy-pdp-pap-0 09:27:21 policy-pap | [2025-06-19T09:21:58.792+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Found no committed offset for partition policy-pdp-pap-0 09:27:21 policy-pap | [2025-06-19T09:21:58.827+00:00|INFO|SubscriptionState|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 
09:27:21 policy-pap | [2025-06-19T09:21:59.432+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c6a8bb97-2c08-4522-a637-4a0267c3b861-3, groupId=c6a8bb97-2c08-4522-a637-4a0267c3b861] Successfully joined group with generation Generation{generationId=1, memberId='consumer-c6a8bb97-2c08-4522-a637-4a0267c3b861-3-51425536-985d-4a92-9ee0-85d89d098cce', protocol='range'} 09:27:21 policy-pap | [2025-06-19T09:21:59.433+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c6a8bb97-2c08-4522-a637-4a0267c3b861-3, groupId=c6a8bb97-2c08-4522-a637-4a0267c3b861] Finished assignment for group at generation 1: {consumer-c6a8bb97-2c08-4522-a637-4a0267c3b861-3-51425536-985d-4a92-9ee0-85d89d098cce=Assignment(partitions=[policy-pdp-pap-0])} 09:27:21 policy-pap | [2025-06-19T09:21:59.440+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c6a8bb97-2c08-4522-a637-4a0267c3b861-3, groupId=c6a8bb97-2c08-4522-a637-4a0267c3b861] Successfully synced group in generation Generation{generationId=1, memberId='consumer-c6a8bb97-2c08-4522-a637-4a0267c3b861-3-51425536-985d-4a92-9ee0-85d89d098cce', protocol='range'} 09:27:21 policy-pap | [2025-06-19T09:21:59.440+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c6a8bb97-2c08-4522-a637-4a0267c3b861-3, groupId=c6a8bb97-2c08-4522-a637-4a0267c3b861] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 09:27:21 policy-pap | [2025-06-19T09:21:59.441+00:00|INFO|ConsumerRebalanceListenerInvoker|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c6a8bb97-2c08-4522-a637-4a0267c3b861-3, groupId=c6a8bb97-2c08-4522-a637-4a0267c3b861] Adding newly assigned partitions: policy-pdp-pap-0 09:27:21 policy-pap | [2025-06-19T09:21:59.443+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c6a8bb97-2c08-4522-a637-4a0267c3b861-3, groupId=c6a8bb97-2c08-4522-a637-4a0267c3b861] Found no committed offset for partition policy-pdp-pap-0 09:27:21 policy-pap | [2025-06-19T09:21:59.445+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c6a8bb97-2c08-4522-a637-4a0267c3b861-3, groupId=c6a8bb97-2c08-4522-a637-4a0267c3b861] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 
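The coordinator discovery, "(Re-)joining group" and "Adding newly assigned partitions" messages above are the normal classic-protocol rebalance for brand-new consumer groups on a freshly created topic. If one wanted to observe the same assignment event from application code, the Kafka client exposes it through a ConsumerRebalanceListener; the small sketch below is illustrative only (the class name AssignmentLogger is made up).

import java.util.Collection;
import java.util.List;

import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

// Illustrative listener surfacing the same assignment events that appear in the PAP log above.
public class AssignmentLogger implements ConsumerRebalanceListener {
    @Override
    public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
        System.out.println("Adding newly assigned partitions: " + partitions);  // e.g. [policy-pdp-pap-0]
    }

    @Override
    public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
        System.out.println("Revoking partitions: " + partitions);
    }

    // Usage sketch: register the listener when subscribing.
    static void register(KafkaConsumer<String, String> consumer) {
        consumer.subscribe(List.of("policy-pdp-pap"), new AssignmentLogger());
    }
}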
09:27:21 policy-pap | [2025-06-19T09:22:41.615+00:00|INFO|[/policy/pap/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet' 09:27:21 policy-pap | [2025-06-19T09:22:41.615+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet' 09:27:21 policy-pap | [2025-06-19T09:22:41.616+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 1 ms 09:27:21 policy-pap | [2025-06-19T09:23:49.135+00:00|INFO|OrderedServiceImpl|KAFKA-source-policy-pdp-pap] ***** OrderedServiceImpl implementers: 09:27:21 policy-pap | [] 09:27:21 policy-pap | [2025-06-19T09:23:49.136+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 09:27:21 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Status Registration Message","response":null,"policies":[],"name":"opa-6446e8da-32c3-48c2-9df7-d65664d9050e","requestId":"03f32307-1284-4774-b130-da249ecd5247","pdpGroup":"opaGroup","pdpSubgroup":null,"timestampMs":"1750325029090","deploymentInstanceInfo":""} 09:27:21 policy-pap | [2025-06-19T09:23:49.136+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 09:27:21 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Status Registration Message","response":null,"policies":[],"name":"opa-6446e8da-32c3-48c2-9df7-d65664d9050e","requestId":"03f32307-1284-4774-b130-da249ecd5247","pdpGroup":"opaGroup","pdpSubgroup":null,"timestampMs":"1750325029090","deploymentInstanceInfo":""} 09:27:21 policy-pap | [2025-06-19T09:23:49.141+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus 09:27:21 policy-pap | [2025-06-19T09:23:49.676+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpUpdate starting 09:27:21 policy-pap | [2025-06-19T09:23:49.676+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpUpdate starting listener 09:27:21 policy-pap | [2025-06-19T09:23:49.676+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpUpdate starting timer 09:27:21 policy-pap | [2025-06-19T09:23:49.677+00:00|INFO|TimerManager|KAFKA-source-policy-heartbeat] update timer registered Timer [name=659e2a07-1e2b-4f69-8e42-e74eba4472f6, expireMs=1750325059677] 09:27:21 policy-pap | [2025-06-19T09:23:49.678+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpUpdate starting enqueue 09:27:21 policy-pap | [2025-06-19T09:23:49.678+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpUpdate started 09:27:21 policy-pap | [2025-06-19T09:23:49.678+00:00|INFO|TimerManager|Thread-9] update timer waiting 29999ms Timer [name=659e2a07-1e2b-4f69-8e42-e74eba4472f6, expireMs=1750325059677] 09:27:21 policy-pap | [2025-06-19T09:23:49.685+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 09:27:21 policy-pap | 
{"source":"pap-6f1d1af8-1c2a-4d8a-8de4-bf40559a9b3f","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.slice.capacity.check":"ewogICAgInRocmVzaG9sZCI6IDcwCn0="},"policy":{"slice.capacity.check":"cGFja2FnZSBzbGljZS5jYXBhY2l0eS5jaGVjawoKIyBEZWZhdWx0IHJ1bGUgdG8gZGVueSBpZiBubyBwb2xpY3kgbWF0Y2hlcwpkZWZhdWx0IGRlY2lzaW9uIDo9IHsKCSJyZXN1bHQiOiAiUGVybWl0IiwKCSJyZWFzb24iOiAiTm8gbWF0Y2hpbmcgcnVsZXMgYXBwbGllZCIsCn0KCiMgRGVueSBydWxlIGZvciBgc3N0ID0gMWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAxCglpbnB1dC50b3RhbF9yZXNvdXJjZSA+IGRhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGQKfQoKIyBEZW55IHJ1bGUgZm9yIGBzc3QgPSAyOWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAyOQoJaW5wdXQudG90YWxfcmVzb3VyY2UgPiBkYXRhLm5vZGUuc2xpY2UuY2FwYWNpdHkuY2hlY2sudGhyZXNob2xkCn0K"}},"name":"slice.capacity.check","version":"1.0.0","metadata":{"policy-id":"slice.capacity.check","policy-version":"1.0.0"}}],"messageName":"PDP_UPDATE","requestId":"659e2a07-1e2b-4f69-8e42-e74eba4472f6","timestampMs":1750325029654,"name":"opa-6446e8da-32c3-48c2-9df7-d65664d9050e","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 09:27:21 policy-pap | [2025-06-19T09:23:49.737+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 09:27:21 policy-pap | 
{"source":"pap-6f1d1af8-1c2a-4d8a-8de4-bf40559a9b3f","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.slice.capacity.check":"ewogICAgInRocmVzaG9sZCI6IDcwCn0="},"policy":{"slice.capacity.check":"cGFja2FnZSBzbGljZS5jYXBhY2l0eS5jaGVjawoKIyBEZWZhdWx0IHJ1bGUgdG8gZGVueSBpZiBubyBwb2xpY3kgbWF0Y2hlcwpkZWZhdWx0IGRlY2lzaW9uIDo9IHsKCSJyZXN1bHQiOiAiUGVybWl0IiwKCSJyZWFzb24iOiAiTm8gbWF0Y2hpbmcgcnVsZXMgYXBwbGllZCIsCn0KCiMgRGVueSBydWxlIGZvciBgc3N0ID0gMWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAxCglpbnB1dC50b3RhbF9yZXNvdXJjZSA+IGRhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGQKfQoKIyBEZW55IHJ1bGUgZm9yIGBzc3QgPSAyOWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAyOQoJaW5wdXQudG90YWxfcmVzb3VyY2UgPiBkYXRhLm5vZGUuc2xpY2UuY2FwYWNpdHkuY2hlY2sudGhyZXNob2xkCn0K"}},"name":"slice.capacity.check","version":"1.0.0","metadata":{"policy-id":"slice.capacity.check","policy-version":"1.0.0"}}],"messageName":"PDP_UPDATE","requestId":"659e2a07-1e2b-4f69-8e42-e74eba4472f6","timestampMs":1750325029654,"name":"opa-6446e8da-32c3-48c2-9df7-d65664d9050e","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 09:27:21 policy-pap | [2025-06-19T09:23:49.737+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 09:27:21 policy-pap | [2025-06-19T09:23:49.739+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 09:27:21 policy-pap | 
{"source":"pap-6f1d1af8-1c2a-4d8a-8de4-bf40559a9b3f","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.slice.capacity.check":"ewogICAgInRocmVzaG9sZCI6IDcwCn0="},"policy":{"slice.capacity.check":"cGFja2FnZSBzbGljZS5jYXBhY2l0eS5jaGVjawoKIyBEZWZhdWx0IHJ1bGUgdG8gZGVueSBpZiBubyBwb2xpY3kgbWF0Y2hlcwpkZWZhdWx0IGRlY2lzaW9uIDo9IHsKCSJyZXN1bHQiOiAiUGVybWl0IiwKCSJyZWFzb24iOiAiTm8gbWF0Y2hpbmcgcnVsZXMgYXBwbGllZCIsCn0KCiMgRGVueSBydWxlIGZvciBgc3N0ID0gMWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAxCglpbnB1dC50b3RhbF9yZXNvdXJjZSA+IGRhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGQKfQoKIyBEZW55IHJ1bGUgZm9yIGBzc3QgPSAyOWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAyOQoJaW5wdXQudG90YWxfcmVzb3VyY2UgPiBkYXRhLm5vZGUuc2xpY2UuY2FwYWNpdHkuY2hlY2sudGhyZXNob2xkCn0K"}},"name":"slice.capacity.check","version":"1.0.0","metadata":{"policy-id":"slice.capacity.check","policy-version":"1.0.0"}}],"messageName":"PDP_UPDATE","requestId":"659e2a07-1e2b-4f69-8e42-e74eba4472f6","timestampMs":1750325029654,"name":"opa-6446e8da-32c3-48c2-9df7-d65664d9050e","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 09:27:21 policy-pap | [2025-06-19T09:23:49.740+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 09:27:21 policy-pap | [2025-06-19T09:23:49.777+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 09:27:21 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"659e2a07-1e2b-4f69-8e42-e74eba4472f6","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"slice.capacity.check\": \"1.0.0\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-6446e8da-32c3-48c2-9df7-d65664d9050e","requestId":"bab848e5-df3c-4f23-b3bc-bca0a9d81cf1","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750325029762","deploymentInstanceInfo":""} 09:27:21 policy-pap | [2025-06-19T09:23:49.778+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 659e2a07-1e2b-4f69-8e42-e74eba4472f6 09:27:21 policy-pap | [2025-06-19T09:23:49.779+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 09:27:21 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"659e2a07-1e2b-4f69-8e42-e74eba4472f6","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"slice.capacity.check\": \"1.0.0\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-6446e8da-32c3-48c2-9df7-d65664d9050e","requestId":"bab848e5-df3c-4f23-b3bc-bca0a9d81cf1","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750325029762","deploymentInstanceInfo":""} 09:27:21 policy-pap | 
[2025-06-19T09:23:49.780+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpUpdate stopping 09:27:21 policy-pap | [2025-06-19T09:23:49.781+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpUpdate stopping enqueue 09:27:21 policy-pap | [2025-06-19T09:23:49.781+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpUpdate stopping timer 09:27:21 policy-pap | [2025-06-19T09:23:49.781+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=659e2a07-1e2b-4f69-8e42-e74eba4472f6, expireMs=1750325059677] 09:27:21 policy-pap | [2025-06-19T09:23:49.781+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpUpdate stopping listener 09:27:21 policy-pap | [2025-06-19T09:23:49.782+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpUpdate stopped 09:27:21 policy-pap | [2025-06-19T09:23:49.798+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpUpdate successful 09:27:21 policy-pap | [2025-06-19T09:23:49.798+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-6446e8da-32c3-48c2-9df7-d65664d9050e start publishing next request 09:27:21 policy-pap | [2025-06-19T09:23:49.798+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpStateChange starting 09:27:21 policy-pap | [2025-06-19T09:23:49.798+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpStateChange starting listener 09:27:21 policy-pap | [2025-06-19T09:23:49.798+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpStateChange starting timer 09:27:21 policy-pap | [2025-06-19T09:23:49.798+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer registered Timer [name=a6922d7d-589c-46cd-9caa-cee843a203b2, expireMs=1750325059798] 09:27:21 policy-pap | [2025-06-19T09:23:49.799+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpStateChange starting enqueue 09:27:21 policy-pap | [2025-06-19T09:23:49.799+00:00|INFO|TimerManager|Thread-10] state-change timer waiting 29999ms Timer [name=a6922d7d-589c-46cd-9caa-cee843a203b2, expireMs=1750325059798] 09:27:21 policy-pap | [2025-06-19T09:23:49.799+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpStateChange started 09:27:21 policy-pap | [2025-06-19T09:23:49.800+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification] 09:27:21 policy-pap | {"deployed-policies":[{"policy-type":"onap.policies.native.opa","policy-type-version":"1.0.0","policy-id":"slice.capacity.check","policy-version":"1.0.0","success-count":1,"failure-count":0,"incomplete-count":0}],"undeployed-policies":[]} 09:27:21 policy-pap | [2025-06-19T09:23:49.800+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 09:27:21 policy-pap | {"source":"pap-6f1d1af8-1c2a-4d8a-8de4-bf40559a9b3f","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"a6922d7d-589c-46cd-9caa-cee843a203b2","timestampMs":1750325029655,"name":"opa-6446e8da-32c3-48c2-9df7-d65664d9050e","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 09:27:21 policy-pap | [2025-06-19T09:23:49.819+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 09:27:21 policy-pap | 
{"source":"pap-6f1d1af8-1c2a-4d8a-8de4-bf40559a9b3f","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"a6922d7d-589c-46cd-9caa-cee843a203b2","timestampMs":1750325029655,"name":"opa-6446e8da-32c3-48c2-9df7-d65664d9050e","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 09:27:21 policy-pap | [2025-06-19T09:23:49.822+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_STATE_CHANGE 09:27:21 policy-pap | [2025-06-19T09:23:49.825+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 09:27:21 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message to Pdp State Change","response":{"responseTo":"a6922d7d-589c-46cd-9caa-cee843a203b2","responseStatus":"SUCCESS","responseMessage":"PDP State Changed From PASSIVE TO Active"},"policies":[],"name":"opa-6446e8da-32c3-48c2-9df7-d65664d9050e","requestId":"b7178b8b-01a6-49dd-ae5b-921dfaf66037","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750325029813","deploymentInstanceInfo":""} 09:27:21 policy-pap | [2025-06-19T09:23:49.827+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id a6922d7d-589c-46cd-9caa-cee843a203b2 09:27:21 policy-pap | [2025-06-19T09:23:49.827+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] The metadata response from the cluster reported a recoverable issue with correlation id 7 : {policy-notification=LEADER_NOT_AVAILABLE} 09:27:21 policy-pap | [2025-06-19T09:23:50.147+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 09:27:21 policy-pap | {"source":"pap-6f1d1af8-1c2a-4d8a-8de4-bf40559a9b3f","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"a6922d7d-589c-46cd-9caa-cee843a203b2","timestampMs":1750325029655,"name":"opa-6446e8da-32c3-48c2-9df7-d65664d9050e","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 09:27:21 policy-pap | [2025-06-19T09:23:50.147+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATE_CHANGE 09:27:21 policy-pap | [2025-06-19T09:23:50.150+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 09:27:21 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message to Pdp State Change","response":{"responseTo":"a6922d7d-589c-46cd-9caa-cee843a203b2","responseStatus":"SUCCESS","responseMessage":"PDP State Changed From PASSIVE TO Active"},"policies":[],"name":"opa-6446e8da-32c3-48c2-9df7-d65664d9050e","requestId":"b7178b8b-01a6-49dd-ae5b-921dfaf66037","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750325029813","deploymentInstanceInfo":""} 09:27:21 policy-pap | [2025-06-19T09:23:50.150+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpStateChange stopping 09:27:21 policy-pap | [2025-06-19T09:23:50.150+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpStateChange stopping enqueue 09:27:21 policy-pap | [2025-06-19T09:23:50.150+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpStateChange stopping timer 09:27:21 policy-pap | [2025-06-19T09:23:50.150+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer cancelled Timer [name=a6922d7d-589c-46cd-9caa-cee843a203b2, expireMs=1750325059798] 09:27:21 policy-pap | 
[2025-06-19T09:23:50.150+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpStateChange stopping listener 09:27:21 policy-pap | [2025-06-19T09:23:50.150+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpStateChange stopped 09:27:21 policy-pap | [2025-06-19T09:23:50.151+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpStateChange successful 09:27:21 policy-pap | [2025-06-19T09:23:50.151+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-6446e8da-32c3-48c2-9df7-d65664d9050e start publishing next request 09:27:21 policy-pap | [2025-06-19T09:23:50.151+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpUpdate starting 09:27:21 policy-pap | [2025-06-19T09:23:50.151+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpUpdate starting listener 09:27:21 policy-pap | [2025-06-19T09:23:50.151+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpUpdate starting timer 09:27:21 policy-pap | [2025-06-19T09:23:50.151+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer registered Timer [name=8b173699-25bf-4249-b02b-519f85c7ac42, expireMs=1750325060151] 09:27:21 policy-pap | [2025-06-19T09:23:50.151+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpUpdate starting enqueue 09:27:21 policy-pap | [2025-06-19T09:23:50.151+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 09:27:21 policy-pap | {"source":"pap-6f1d1af8-1c2a-4d8a-8de4-bf40559a9b3f","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"8b173699-25bf-4249-b02b-519f85c7ac42","timestampMs":1750325030140,"name":"opa-6446e8da-32c3-48c2-9df7-d65664d9050e","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 09:27:21 policy-pap | [2025-06-19T09:23:50.151+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpUpdate started 09:27:21 policy-pap | [2025-06-19T09:23:50.159+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 09:27:21 policy-pap | {"source":"pap-6f1d1af8-1c2a-4d8a-8de4-bf40559a9b3f","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"8b173699-25bf-4249-b02b-519f85c7ac42","timestampMs":1750325030140,"name":"opa-6446e8da-32c3-48c2-9df7-d65664d9050e","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 09:27:21 policy-pap | [2025-06-19T09:23:50.159+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 09:27:21 policy-pap | [2025-06-19T09:23:50.160+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 09:27:21 policy-pap | {"source":"pap-6f1d1af8-1c2a-4d8a-8de4-bf40559a9b3f","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"8b173699-25bf-4249-b02b-519f85c7ac42","timestampMs":1750325030140,"name":"opa-6446e8da-32c3-48c2-9df7-d65664d9050e","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 09:27:21 policy-pap | [2025-06-19T09:23:50.160+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 09:27:21 policy-pap | [2025-06-19T09:23:50.171+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 09:27:21 policy-pap | 
{"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"8b173699-25bf-4249-b02b-519f85c7ac42","responseStatus":"SUCCESS","responseMessage":"PDP UPDATE is successfull"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-6446e8da-32c3-48c2-9df7-d65664d9050e","requestId":"bd283154-4046-4a6a-bbcb-ff7759e8bf7a","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750325030157","deploymentInstanceInfo":""} 09:27:21 policy-pap | [2025-06-19T09:23:50.171+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 09:27:21 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"8b173699-25bf-4249-b02b-519f85c7ac42","responseStatus":"SUCCESS","responseMessage":"PDP UPDATE is successfull"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-6446e8da-32c3-48c2-9df7-d65664d9050e","requestId":"bd283154-4046-4a6a-bbcb-ff7759e8bf7a","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750325030157","deploymentInstanceInfo":""} 09:27:21 policy-pap | [2025-06-19T09:23:50.172+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 8b173699-25bf-4249-b02b-519f85c7ac42 09:27:21 policy-pap | [2025-06-19T09:23:50.172+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpUpdate stopping 09:27:21 policy-pap | [2025-06-19T09:23:50.172+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpUpdate stopping enqueue 09:27:21 policy-pap | [2025-06-19T09:23:50.172+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpUpdate stopping timer 09:27:21 policy-pap | [2025-06-19T09:23:50.172+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=8b173699-25bf-4249-b02b-519f85c7ac42, expireMs=1750325060151] 09:27:21 policy-pap | [2025-06-19T09:23:50.172+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpUpdate stopping listener 09:27:21 policy-pap | [2025-06-19T09:23:50.172+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpUpdate stopped 09:27:21 policy-pap | [2025-06-19T09:23:50.178+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpUpdate successful 09:27:21 policy-pap | [2025-06-19T09:23:50.179+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-6446e8da-32c3-48c2-9df7-d65664d9050e has no more requests 09:27:21 policy-pap | [2025-06-19T09:23:53.536+00:00|INFO|PdpModifyRequestMap|pool-3-thread-1] check for PDP records older than 360000ms 09:27:21 policy-pap | [2025-06-19T09:24:19.678+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=659e2a07-1e2b-4f69-8e42-e74eba4472f6, expireMs=1750325059677] 09:27:21 policy-pap | [2025-06-19T09:24:19.798+00:00|INFO|TimerManager|Thread-10] state-change timer discarded (expired) Timer [name=a6922d7d-589c-46cd-9caa-cee843a203b2, expireMs=1750325059798] 09:27:21 policy-pap | [2025-06-19T09:24:49.100+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 09:27:21 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp 
heartbeat","response":null,"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-6446e8da-32c3-48c2-9df7-d65664d9050e","requestId":"4cf58d2a-0c0c-4b8b-969d-a2a7c9c825a4","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750325089085","deploymentInstanceInfo":""} 09:27:21 policy-pap | [2025-06-19T09:24:49.101+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus 09:27:21 policy-pap | [2025-06-19T09:24:49.103+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 09:27:21 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp heartbeat","response":null,"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-6446e8da-32c3-48c2-9df7-d65664d9050e","requestId":"4cf58d2a-0c0c-4b8b-969d-a2a7c9c825a4","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750325089085","deploymentInstanceInfo":""} 09:27:21 policy-pap | [2025-06-19T09:25:01.045+00:00|INFO|SessionData|http-nio-6969-exec-10] cache group opaGroup 09:27:21 policy-pap | [2025-06-19T09:25:01.047+00:00|INFO|PdpGroupDeployProvider|http-nio-6969-exec-10] add policy zoneB 1.0.6 to subgroup opaGroup opa count=2 09:27:21 policy-pap | [2025-06-19T09:25:01.048+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-10] Registering a deploy for policy zoneB 1.0.6 09:27:21 policy-pap | [2025-06-19T09:25:01.048+00:00|INFO|SessionData|http-nio-6969-exec-10] add update opa-6446e8da-32c3-48c2-9df7-d65664d9050e opaGroup opa policies=1 09:27:21 policy-pap | [2025-06-19T09:25:01.049+00:00|INFO|SessionData|http-nio-6969-exec-10] update cached group opaGroup 09:27:21 policy-pap | [2025-06-19T09:25:01.050+00:00|INFO|SessionData|http-nio-6969-exec-10] updating DB group opaGroup 09:27:21 policy-pap | [2025-06-19T09:25:01.074+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-10] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=opaGroup, pdpType=opa, policy=zoneB 1.0.6, action=DEPLOYMENT, timestamp=2025-06-19T09:25:01Z, user=policyadmin)] 09:27:21 policy-pap | [2025-06-19T09:25:01.120+00:00|INFO|ServiceManager|http-nio-6969-exec-10] opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpUpdate starting 09:27:21 policy-pap | [2025-06-19T09:25:01.120+00:00|INFO|ServiceManager|http-nio-6969-exec-10] opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpUpdate starting listener 09:27:21 policy-pap | [2025-06-19T09:25:01.120+00:00|INFO|ServiceManager|http-nio-6969-exec-10] opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpUpdate starting timer 09:27:21 policy-pap | [2025-06-19T09:25:01.120+00:00|INFO|TimerManager|http-nio-6969-exec-10] update timer registered Timer [name=c212e2a9-33e3-4aa7-bd48-d52ca83f261d, expireMs=1750325131120] 09:27:21 policy-pap | [2025-06-19T09:25:01.120+00:00|INFO|ServiceManager|http-nio-6969-exec-10] opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpUpdate starting enqueue 09:27:21 policy-pap | [2025-06-19T09:25:01.120+00:00|INFO|ServiceManager|http-nio-6969-exec-10] opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpUpdate started 09:27:21 policy-pap | [2025-06-19T09:25:01.121+00:00|INFO|TimerManager|Thread-9] update timer waiting 29999ms Timer [name=c212e2a9-33e3-4aa7-bd48-d52ca83f261d, expireMs=1750325131120] 09:27:21 policy-pap | [2025-06-19T09:25:01.121+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 09:27:21 policy-pap | 
{"source":"pap-6f1d1af8-1c2a-4d8a-8de4-bf40559a9b3f","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.zoneB":"ewogICJ6b25lIjogewogICAgInpvbmVfYWNjZXNzX2xvZ3MiOiBbCiAgICAgIHsgImxvZ19pZCI6ICJsb2cxIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDA5OjAwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJncmFudGVkIiwgInVzZXIiOiAidXNlcjEiIH0sCiAgICAgIHsgImxvZ19pZCI6ICJsb2cyIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDEwOjMwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJkZW5pZWQiLCAidXNlciI6ICJ1c2VyMiIgfSwKICAgICAgeyAibG9nX2lkIjogImxvZzMiLCAidGltZXN0YW1wIjogIjIwMjQtMTEtMDFUMTE6MDA6MDBaIiwgInpvbmVfaWQiOiAiem9uZUIiLCAiYWNjZXNzIjogImdyYW50ZWQiLCAidXNlciI6ICJ1c2VyMyIgfQogICAgXQogIH0KfQ=="},"policy":{"zoneB":"cGFja2FnZSB6b25lQgogCmltcG9ydCByZWdvLnYxCiAKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQogCmFsbG93IGlmIHsKICAgIGhhc196b25lX2FjY2VzcwogICAgYWN0aW9uX2lzX2xvZ192aWV3Cn0KIAphY3Rpb25faXNfbG9nX3ZpZXcgaWYgewogICAgInZpZXciIGluIGlucHV0LmFjdGlvbnMKfQogCmhhc196b25lX2FjY2VzcyBjb250YWlucyBhY2Nlc3NfZGF0YSBpZiB7CiAgICBzb21lIHpvbmVfZGF0YSBpbiBkYXRhLm5vZGUuem9uZUIuem9uZS56b25lX2FjY2Vzc19sb2dzCiAgICB6b25lX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KICAgIHpvbmVfZGF0YS50aW1lc3RhbXAgPCBpbnB1dC50aW1lX3BlcmlvZC50bwogICAgem9uZV9kYXRhLnpvbmVfaWQgPT0gaW5wdXQuem9uZV9pZAogICAgYWNjZXNzX2RhdGEgOj0ge2RhdGF0eXBlOiB6b25lX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"zoneB","version":"1.0.6","metadata":{"policy-id":"zoneB","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"c212e2a9-33e3-4aa7-bd48-d52ca83f261d","timestampMs":1750325101048,"name":"opa-6446e8da-32c3-48c2-9df7-d65664d9050e","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 09:27:21 policy-pap | [2025-06-19T09:25:01.132+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 09:27:21 policy-pap | 
{"source":"pap-6f1d1af8-1c2a-4d8a-8de4-bf40559a9b3f","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.zoneB":"ewogICJ6b25lIjogewogICAgInpvbmVfYWNjZXNzX2xvZ3MiOiBbCiAgICAgIHsgImxvZ19pZCI6ICJsb2cxIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDA5OjAwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJncmFudGVkIiwgInVzZXIiOiAidXNlcjEiIH0sCiAgICAgIHsgImxvZ19pZCI6ICJsb2cyIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDEwOjMwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJkZW5pZWQiLCAidXNlciI6ICJ1c2VyMiIgfSwKICAgICAgeyAibG9nX2lkIjogImxvZzMiLCAidGltZXN0YW1wIjogIjIwMjQtMTEtMDFUMTE6MDA6MDBaIiwgInpvbmVfaWQiOiAiem9uZUIiLCAiYWNjZXNzIjogImdyYW50ZWQiLCAidXNlciI6ICJ1c2VyMyIgfQogICAgXQogIH0KfQ=="},"policy":{"zoneB":"cGFja2FnZSB6b25lQgogCmltcG9ydCByZWdvLnYxCiAKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQogCmFsbG93IGlmIHsKICAgIGhhc196b25lX2FjY2VzcwogICAgYWN0aW9uX2lzX2xvZ192aWV3Cn0KIAphY3Rpb25faXNfbG9nX3ZpZXcgaWYgewogICAgInZpZXciIGluIGlucHV0LmFjdGlvbnMKfQogCmhhc196b25lX2FjY2VzcyBjb250YWlucyBhY2Nlc3NfZGF0YSBpZiB7CiAgICBzb21lIHpvbmVfZGF0YSBpbiBkYXRhLm5vZGUuem9uZUIuem9uZS56b25lX2FjY2Vzc19sb2dzCiAgICB6b25lX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KICAgIHpvbmVfZGF0YS50aW1lc3RhbXAgPCBpbnB1dC50aW1lX3BlcmlvZC50bwogICAgem9uZV9kYXRhLnpvbmVfaWQgPT0gaW5wdXQuem9uZV9pZAogICAgYWNjZXNzX2RhdGEgOj0ge2RhdGF0eXBlOiB6b25lX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"zoneB","version":"1.0.6","metadata":{"policy-id":"zoneB","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"c212e2a9-33e3-4aa7-bd48-d52ca83f261d","timestampMs":1750325101048,"name":"opa-6446e8da-32c3-48c2-9df7-d65664d9050e","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 09:27:21 policy-pap | [2025-06-19T09:25:01.132+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 09:27:21 policy-pap | 
{"source":"pap-6f1d1af8-1c2a-4d8a-8de4-bf40559a9b3f","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.zoneB":"ewogICJ6b25lIjogewogICAgInpvbmVfYWNjZXNzX2xvZ3MiOiBbCiAgICAgIHsgImxvZ19pZCI6ICJsb2cxIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDA5OjAwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJncmFudGVkIiwgInVzZXIiOiAidXNlcjEiIH0sCiAgICAgIHsgImxvZ19pZCI6ICJsb2cyIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDEwOjMwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJkZW5pZWQiLCAidXNlciI6ICJ1c2VyMiIgfSwKICAgICAgeyAibG9nX2lkIjogImxvZzMiLCAidGltZXN0YW1wIjogIjIwMjQtMTEtMDFUMTE6MDA6MDBaIiwgInpvbmVfaWQiOiAiem9uZUIiLCAiYWNjZXNzIjogImdyYW50ZWQiLCAidXNlciI6ICJ1c2VyMyIgfQogICAgXQogIH0KfQ=="},"policy":{"zoneB":"cGFja2FnZSB6b25lQgogCmltcG9ydCByZWdvLnYxCiAKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQogCmFsbG93IGlmIHsKICAgIGhhc196b25lX2FjY2VzcwogICAgYWN0aW9uX2lzX2xvZ192aWV3Cn0KIAphY3Rpb25faXNfbG9nX3ZpZXcgaWYgewogICAgInZpZXciIGluIGlucHV0LmFjdGlvbnMKfQogCmhhc196b25lX2FjY2VzcyBjb250YWlucyBhY2Nlc3NfZGF0YSBpZiB7CiAgICBzb21lIHpvbmVfZGF0YSBpbiBkYXRhLm5vZGUuem9uZUIuem9uZS56b25lX2FjY2Vzc19sb2dzCiAgICB6b25lX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KICAgIHpvbmVfZGF0YS50aW1lc3RhbXAgPCBpbnB1dC50aW1lX3BlcmlvZC50bwogICAgem9uZV9kYXRhLnpvbmVfaWQgPT0gaW5wdXQuem9uZV9pZAogICAgYWNjZXNzX2RhdGEgOj0ge2RhdGF0eXBlOiB6b25lX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"zoneB","version":"1.0.6","metadata":{"policy-id":"zoneB","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"c212e2a9-33e3-4aa7-bd48-d52ca83f261d","timestampMs":1750325101048,"name":"opa-6446e8da-32c3-48c2-9df7-d65664d9050e","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 09:27:21 policy-pap | [2025-06-19T09:25:01.133+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 09:27:21 policy-pap | [2025-06-19T09:25:01.133+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 09:27:21 policy-pap | [2025-06-19T09:25:01.166+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 09:27:21 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"c212e2a9-33e3-4aa7-bd48-d52ca83f261d","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"zoneB\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"zoneB","version":"1.0.6"}],"name":"opa-6446e8da-32c3-48c2-9df7-d65664d9050e","requestId":"13b88216-5f76-48aa-8143-852516149600","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750325101155","deploymentInstanceInfo":""} 09:27:21 policy-pap | [2025-06-19T09:25:01.167+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id c212e2a9-33e3-4aa7-bd48-d52ca83f261d 09:27:21 policy-pap | [2025-06-19T09:25:01.167+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 09:27:21 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"c212e2a9-33e3-4aa7-bd48-d52ca83f261d","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"zoneB\": 
\"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"zoneB","version":"1.0.6"}],"name":"opa-6446e8da-32c3-48c2-9df7-d65664d9050e","requestId":"13b88216-5f76-48aa-8143-852516149600","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750325101155","deploymentInstanceInfo":""} 09:27:21 policy-pap | [2025-06-19T09:25:01.169+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpUpdate stopping 09:27:21 policy-pap | [2025-06-19T09:25:01.169+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpUpdate stopping enqueue 09:27:21 policy-pap | [2025-06-19T09:25:01.169+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpUpdate stopping timer 09:27:21 policy-pap | [2025-06-19T09:25:01.169+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=c212e2a9-33e3-4aa7-bd48-d52ca83f261d, expireMs=1750325131120] 09:27:21 policy-pap | [2025-06-19T09:25:01.169+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpUpdate stopping listener 09:27:21 policy-pap | [2025-06-19T09:25:01.169+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpUpdate stopped 09:27:21 policy-pap | [2025-06-19T09:25:01.195+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpUpdate successful 09:27:21 policy-pap | [2025-06-19T09:25:01.195+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-6446e8da-32c3-48c2-9df7-d65664d9050e has no more requests 09:27:21 policy-pap | [2025-06-19T09:25:01.197+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification] 09:27:21 policy-pap | {"deployed-policies":[{"policy-type":"onap.policies.native.opa","policy-type-version":"1.0.0","policy-id":"zoneB","policy-version":"1.0.6","success-count":1,"failure-count":0,"incomplete-count":0}],"undeployed-policies":[]} 09:27:21 policy-pap | [2025-06-19T09:25:25.654+00:00|INFO|SessionData|http-nio-6969-exec-9] cache group opaGroup 09:27:21 policy-pap | [2025-06-19T09:25:25.655+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-9] remove policy zoneB 1.0.6 from subgroup opaGroup opa count=1 09:27:21 policy-pap | [2025-06-19T09:25:25.656+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-9] Registering an undeploy for policy zoneB 1.0.6 09:27:21 policy-pap | [2025-06-19T09:25:25.656+00:00|INFO|SessionData|http-nio-6969-exec-9] add update opa-6446e8da-32c3-48c2-9df7-d65664d9050e opaGroup opa policies=0 09:27:21 policy-pap | [2025-06-19T09:25:25.656+00:00|INFO|SessionData|http-nio-6969-exec-9] update cached group opaGroup 09:27:21 policy-pap | [2025-06-19T09:25:25.656+00:00|INFO|SessionData|http-nio-6969-exec-9] updating DB group opaGroup 09:27:21 policy-pap | [2025-06-19T09:25:25.667+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-9] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=opaGroup, pdpType=opa, policy=zoneB 1.0.6, action=UNDEPLOYMENT, timestamp=2025-06-19T09:25:25Z, user=policyadmin)] 09:27:21 policy-pap | [2025-06-19T09:25:25.682+00:00|INFO|ServiceManager|http-nio-6969-exec-9] opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpUpdate starting 09:27:21 policy-pap | [2025-06-19T09:25:25.682+00:00|INFO|ServiceManager|http-nio-6969-exec-9] opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpUpdate starting listener 09:27:21 policy-pap | [2025-06-19T09:25:25.682+00:00|INFO|ServiceManager|http-nio-6969-exec-9] 
opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpUpdate starting timer 09:27:21 policy-pap | [2025-06-19T09:25:25.682+00:00|INFO|TimerManager|http-nio-6969-exec-9] update timer registered Timer [name=03bad8c5-d5a9-4425-bf0e-a2ac4f8f48d2, expireMs=1750325155682] 09:27:21 policy-pap | [2025-06-19T09:25:25.682+00:00|INFO|ServiceManager|http-nio-6969-exec-9] opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpUpdate starting enqueue 09:27:21 policy-pap | [2025-06-19T09:25:25.682+00:00|INFO|ServiceManager|http-nio-6969-exec-9] opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpUpdate started 09:27:21 policy-pap | [2025-06-19T09:25:25.683+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 09:27:21 policy-pap | {"source":"pap-6f1d1af8-1c2a-4d8a-8de4-bf40559a9b3f","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"zoneB","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"03bad8c5-d5a9-4425-bf0e-a2ac4f8f48d2","timestampMs":1750325125656,"name":"opa-6446e8da-32c3-48c2-9df7-d65664d9050e","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 09:27:21 policy-pap | [2025-06-19T09:25:25.695+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 09:27:21 policy-pap | {"source":"pap-6f1d1af8-1c2a-4d8a-8de4-bf40559a9b3f","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"zoneB","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"03bad8c5-d5a9-4425-bf0e-a2ac4f8f48d2","timestampMs":1750325125656,"name":"opa-6446e8da-32c3-48c2-9df7-d65664d9050e","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 09:27:21 policy-pap | [2025-06-19T09:25:25.695+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 09:27:21 policy-pap | [2025-06-19T09:25:25.698+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 09:27:21 policy-pap | {"source":"pap-6f1d1af8-1c2a-4d8a-8de4-bf40559a9b3f","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"zoneB","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"03bad8c5-d5a9-4425-bf0e-a2ac4f8f48d2","timestampMs":1750325125656,"name":"opa-6446e8da-32c3-48c2-9df7-d65664d9050e","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 09:27:21 policy-pap | [2025-06-19T09:25:25.698+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 09:27:21 policy-pap | [2025-06-19T09:25:25.706+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 09:27:21 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"03bad8c5-d5a9-4425-bf0e-a2ac4f8f48d2","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"zoneB\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-6446e8da-32c3-48c2-9df7-d65664d9050e","requestId":"523ad858-af5e-4b7e-876b-08d85af2d0f1","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750325125693","deploymentInstanceInfo":""} 09:27:21 policy-pap | [2025-06-19T09:25:25.706+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 03bad8c5-d5a9-4425-bf0e-a2ac4f8f48d2 09:27:21 policy-pap | [2025-06-19T09:25:25.712+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 09:27:21 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp 
Update","response":{"responseTo":"03bad8c5-d5a9-4425-bf0e-a2ac4f8f48d2","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"zoneB\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-6446e8da-32c3-48c2-9df7-d65664d9050e","requestId":"523ad858-af5e-4b7e-876b-08d85af2d0f1","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750325125693","deploymentInstanceInfo":""} 09:27:21 policy-pap | [2025-06-19T09:25:25.712+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpUpdate stopping 09:27:21 policy-pap | [2025-06-19T09:25:25.712+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpUpdate stopping enqueue 09:27:21 policy-pap | [2025-06-19T09:25:25.712+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpUpdate stopping timer 09:27:21 policy-pap | [2025-06-19T09:25:25.713+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=03bad8c5-d5a9-4425-bf0e-a2ac4f8f48d2, expireMs=1750325155682] 09:27:21 policy-pap | [2025-06-19T09:25:25.713+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpUpdate stopping listener 09:27:21 policy-pap | [2025-06-19T09:25:25.713+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpUpdate stopped 09:27:21 policy-pap | [2025-06-19T09:25:25.726+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification] 09:27:21 policy-pap | {"deployed-policies":[],"undeployed-policies":[{"policy-type":"onap.policies.native.opa","policy-type-version":"1.0.0","policy-id":"zoneB","policy-version":"1.0.6","success-count":1,"failure-count":0,"incomplete-count":0}]} 09:27:21 policy-pap | [2025-06-19T09:25:25.727+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpUpdate successful 09:27:21 policy-pap | [2025-06-19T09:25:25.727+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-6446e8da-32c3-48c2-9df7-d65664d9050e has no more requests 09:27:21 policy-pap | [2025-06-19T09:25:26.121+00:00|INFO|SessionData|http-nio-6969-exec-8] cache group opaGroup 09:27:21 policy-pap | [2025-06-19T09:25:26.124+00:00|WARN|PdpGroupDeleteProvider|http-nio-6969-exec-8] failed to undeploy policy: zoneB null 09:27:21 policy-pap | [2025-06-19T09:25:26.124+00:00|WARN|PdpGroupDeleteControllerV1|http-nio-6969-exec-8] undeploy policy failed 09:27:21 policy-pap | org.onap.policy.models.base.PfModelException: policy does not appear in any PDP group: zoneB null 09:27:21 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteProvider.undeployPolicy(PdpGroupDeleteProvider.java:108) 09:27:21 policy-pap | at org.onap.policy.pap.main.rest.ProviderBase.process(ProviderBase.java:161) 09:27:21 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteProvider.undeploy(PdpGroupDeleteProvider.java:92) 09:27:21 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 09:27:21 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) 09:27:21 policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 09:27:21 policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569) 09:27:21 policy-pap | at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:359) 
09:27:21 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:196) 09:27:21 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163) 09:27:21 policy-pap | at org.springframework.aop.aspectj.AspectJAfterThrowingAdvice.invoke(AspectJAfterThrowingAdvice.java:64) 09:27:21 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) 09:27:21 policy-pap | at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:97) 09:27:21 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) 09:27:21 policy-pap | at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:728) 09:27:21 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteProvider$$SpringCGLIB$$0.undeploy() 09:27:21 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.lambda$deletePolicy$1(PdpGroupDeleteControllerV1.java:107) 09:27:21 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.doUndeployOperation(PdpGroupDeleteControllerV1.java:160) 09:27:21 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.deletePolicy(PdpGroupDeleteControllerV1.java:106) 09:27:21 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 09:27:21 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) 09:27:21 policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 09:27:21 policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569) 09:27:21 policy-pap | at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:359) 09:27:21 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:196) 09:27:21 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163) 09:27:21 policy-pap | at org.springframework.aop.aspectj.AspectJAfterThrowingAdvice.invoke(AspectJAfterThrowingAdvice.java:64) 09:27:21 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) 09:27:21 policy-pap | at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:97) 09:27:21 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) 09:27:21 policy-pap | at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:728) 09:27:21 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1$$SpringCGLIB$$0.deletePolicy() 09:27:21 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 09:27:21 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) 09:27:21 policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 09:27:21 policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569) 09:27:21 policy-pap | at 
org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:258) 09:27:21 policy-pap | at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:191) 09:27:21 policy-pap | at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:118) 09:27:21 policy-pap | at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:986) 09:27:21 policy-pap | at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:891) 09:27:21 policy-pap | at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87) 09:27:21 policy-pap | at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1089) 09:27:21 policy-pap | at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:979) 09:27:21 policy-pap | at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1014) 09:27:21 policy-pap | at org.springframework.web.servlet.FrameworkServlet.doDelete(FrameworkServlet.java:936) 09:27:21 policy-pap | at jakarta.servlet.http.HttpServlet.service(HttpServlet.java:659) 09:27:21 policy-pap | at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:885) 09:27:21 policy-pap | at jakarta.servlet.http.HttpServlet.service(HttpServlet.java:723) 09:27:21 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:195) 09:27:21 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 09:27:21 policy-pap | at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:51) 09:27:21 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) 09:27:21 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 09:27:21 policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:108) 09:27:21 policy-pap | at org.springframework.security.web.FilterChainProxy.lambda$doFilterInternal$3(FilterChainProxy.java:231) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$FilterObservation$SimpleFilterObservation.lambda$wrap$1(ObservationFilterChainDecorator.java:479) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$AroundFilterObservation$SimpleAroundFilterObservation.lambda$wrap$1(ObservationFilterChainDecorator.java:340) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator.lambda$wrapSecured$0(ObservationFilterChainDecorator.java:82) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:128) 09:27:21 policy-pap | at org.springframework.security.web.access.intercept.AuthorizationFilter.doFilter(AuthorizationFilter.java:101) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 09:27:21 policy-pap | at 
org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 09:27:21 policy-pap | at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:126) 09:27:21 policy-pap | at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:120) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 09:27:21 policy-pap | at org.springframework.security.web.authentication.AnonymousAuthenticationFilter.doFilter(AnonymousAuthenticationFilter.java:100) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 09:27:21 policy-pap | at org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter.doFilter(SecurityContextHolderAwareRequestFilter.java:179) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 09:27:21 policy-pap | at org.springframework.security.web.savedrequest.RequestCacheAwareFilter.doFilter(RequestCacheAwareFilter.java:63) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 09:27:21 policy-pap | at org.springframework.security.web.authentication.www.BasicAuthenticationFilter.doFilterInternal(BasicAuthenticationFilter.java:213) 09:27:21 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 09:27:21 
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 09:27:21 policy-pap | at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:107) 09:27:21 policy-pap | at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:93) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 09:27:21 policy-pap | at org.springframework.security.web.header.HeaderWriterFilter.doHeadersAfter(HeaderWriterFilter.java:90) 09:27:21 policy-pap | at org.springframework.security.web.header.HeaderWriterFilter.doFilterInternal(HeaderWriterFilter.java:75) 09:27:21 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 09:27:21 policy-pap | at org.springframework.security.web.context.SecurityContextHolderFilter.doFilter(SecurityContextHolderFilter.java:82) 09:27:21 policy-pap | at org.springframework.security.web.context.SecurityContextHolderFilter.doFilter(SecurityContextHolderFilter.java:69) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 09:27:21 policy-pap | at org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter.doFilterInternal(WebAsyncManagerIntegrationFilter.java:62) 09:27:21 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 09:27:21 policy-pap | at org.springframework.security.web.session.DisableEncodeUrlFilter.doFilterInternal(DisableEncodeUrlFilter.java:42) 09:27:21 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 09:27:21 policy-pap | at 
org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$AroundFilterObservation$SimpleAroundFilterObservation.lambda$wrap$0(ObservationFilterChainDecorator.java:323) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:224) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 09:27:21 policy-pap | at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:233) 09:27:21 policy-pap | at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:191) 09:27:21 policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:113) 09:27:21 policy-pap | at org.springframework.web.servlet.handler.HandlerMappingIntrospector.lambda$createCacheFilter$3(HandlerMappingIntrospector.java:243) 09:27:21 policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:113) 09:27:21 policy-pap | at org.springframework.web.filter.CompositeFilter.doFilter(CompositeFilter.java:74) 09:27:21 policy-pap | at org.springframework.security.config.annotation.web.configuration.WebMvcSecurityConfiguration$CompositeFilterChainProxy.doFilter(WebMvcSecurityConfiguration.java:238) 09:27:21 policy-pap | at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:362) 09:27:21 policy-pap | at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:278) 09:27:21 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) 09:27:21 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 09:27:21 policy-pap | at org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:100) 09:27:21 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 09:27:21 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) 09:27:21 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 09:27:21 policy-pap | at org.springframework.web.filter.FormContentFilter.doFilterInternal(FormContentFilter.java:93) 09:27:21 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 09:27:21 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) 09:27:21 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 09:27:21 policy-pap | at org.springframework.web.filter.ServerHttpObservationFilter.doFilterInternal(ServerHttpObservationFilter.java:114) 09:27:21 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 09:27:21 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) 09:27:21 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 09:27:21 policy-pap | at 
org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:201) 09:27:21 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 09:27:21 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) 09:27:21 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 09:27:21 policy-pap | at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:167) 09:27:21 policy-pap | at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:90) 09:27:21 policy-pap | at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:483) 09:27:21 policy-pap | at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:116) 09:27:21 policy-pap | at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:93) 09:27:21 policy-pap | at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:74) 09:27:21 policy-pap | at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:344) 09:27:21 policy-pap | at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:398) 09:27:21 policy-pap | at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:63) 09:27:21 policy-pap | at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:903) 09:27:21 policy-pap | at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1740) 09:27:21 policy-pap | at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:52) 09:27:21 policy-pap | at org.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1189) 09:27:21 policy-pap | at org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:658) 09:27:21 policy-pap | at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:63) 09:27:21 policy-pap | at java.base/java.lang.Thread.run(Thread.java:840) 09:27:21 policy-pap | [2025-06-19T09:25:26.920+00:00|INFO|SessionData|http-nio-6969-exec-7] cache group opaGroup 09:27:21 policy-pap | [2025-06-19T09:25:26.920+00:00|INFO|PdpGroupDeployProvider|http-nio-6969-exec-7] add policy vehicle 1.0.6 to subgroup opaGroup opa count=2 09:27:21 policy-pap | [2025-06-19T09:25:26.920+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-7] Registering a deploy for policy vehicle 1.0.6 09:27:21 policy-pap | [2025-06-19T09:25:26.920+00:00|INFO|SessionData|http-nio-6969-exec-7] add update opa-6446e8da-32c3-48c2-9df7-d65664d9050e opaGroup opa policies=1 09:27:21 policy-pap | [2025-06-19T09:25:26.920+00:00|INFO|SessionData|http-nio-6969-exec-7] update cached group opaGroup 09:27:21 policy-pap | [2025-06-19T09:25:26.920+00:00|INFO|SessionData|http-nio-6969-exec-7] updating DB group opaGroup 09:27:21 policy-pap | [2025-06-19T09:25:26.928+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-7] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=opaGroup, pdpType=opa, policy=vehicle 1.0.6, action=DEPLOYMENT, timestamp=2025-06-19T09:25:26Z, user=policyadmin)] 09:27:21 policy-pap | [2025-06-19T09:25:26.936+00:00|INFO|ServiceManager|http-nio-6969-exec-7] opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpUpdate starting 09:27:21 policy-pap | [2025-06-19T09:25:26.937+00:00|INFO|ServiceManager|http-nio-6969-exec-7] opa-6446e8da-32c3-48c2-9df7-d65664d9050e 
PdpUpdate starting listener 09:27:21 policy-pap | [2025-06-19T09:25:26.937+00:00|INFO|ServiceManager|http-nio-6969-exec-7] opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpUpdate starting timer 09:27:21 policy-pap | [2025-06-19T09:25:26.937+00:00|INFO|TimerManager|http-nio-6969-exec-7] update timer registered Timer [name=f035de1b-a3a2-4fb7-882e-721e5e1c050d, expireMs=1750325156937] 09:27:21 policy-pap | [2025-06-19T09:25:26.937+00:00|INFO|ServiceManager|http-nio-6969-exec-7] opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpUpdate starting enqueue 09:27:21 policy-pap | [2025-06-19T09:25:26.937+00:00|INFO|ServiceManager|http-nio-6969-exec-7] opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpUpdate started 09:27:21 policy-pap | [2025-06-19T09:25:26.938+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 09:27:21 policy-pap | {"source":"pap-6f1d1af8-1c2a-4d8a-8de4-bf40559a9b3f","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.vehicle":"ewogICJ2ZWhpY2xlcyI6IFsKICAgIHsgInZlaGljbGVfaWQiOiAidjEiLCAib3duZXIiOiAidXNlcjEiLCAidHlwZSI6ICJjYXIiLCAic3RhdHVzIjogImF2YWlsYWJsZSIgfSwKICAgIHsgInZlaGljbGVfaWQiOiAidjIiLCAib3duZXIiOiAidXNlcjIiLCAidHlwZSI6ICJiaWtlIiwgInN0YXR1cyI6ICJpbiB1c2UiIH0KICBdCn0K"},"policy":{"vehicle":"cGFja2FnZSB2ZWhpY2xlCgppbXBvcnQgIHJlZ28udjEKCmRlZmF1bHQgYWxsb3cgOj0gZmFsc2UKCmFsbG93IGlmIHsKICAgIHVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzCiAgICBhY3Rpb25faXNfZ3JhbnRlZAp9CgphY3Rpb25faXNfZ3JhbnRlZCBpZiB7CiAgICAidXNlIiBpbiBpbnB1dC5hY3Rpb25zCn0KCnVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzIGNvbnRhaW5zIHZlaGljbGVfZGF0YSBpZiB7CiAgICBzb21lIHZlaGljbGUgaW4gZGF0YS5ub2RlLnZlaGljbGUudmVoaWNsZXMKICAgIHZlaGljbGUudmVoaWNsZV9pZCA9PSBpbnB1dC52ZWhpY2xlX2lkCiAgICB2ZWhpY2xlLm93bmVyID09IGlucHV0LnVzZXIKICAgIHZlaGljbGVfZGF0YSA6PSB7aW5mbzogdmVoaWNsZVtpbmZvXSB8IGluZm8gaW4gaW5wdXQuYXR0cmlidXRlc30KfQo="}},"name":"vehicle","version":"1.0.6","metadata":{"policy-id":"vehicle","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"f035de1b-a3a2-4fb7-882e-721e5e1c050d","timestampMs":1750325126920,"name":"opa-6446e8da-32c3-48c2-9df7-d65664d9050e","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 09:27:21 policy-pap | [2025-06-19T09:25:26.945+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 09:27:21 policy-pap | 
{"source":"pap-6f1d1af8-1c2a-4d8a-8de4-bf40559a9b3f","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.vehicle":"ewogICJ2ZWhpY2xlcyI6IFsKICAgIHsgInZlaGljbGVfaWQiOiAidjEiLCAib3duZXIiOiAidXNlcjEiLCAidHlwZSI6ICJjYXIiLCAic3RhdHVzIjogImF2YWlsYWJsZSIgfSwKICAgIHsgInZlaGljbGVfaWQiOiAidjIiLCAib3duZXIiOiAidXNlcjIiLCAidHlwZSI6ICJiaWtlIiwgInN0YXR1cyI6ICJpbiB1c2UiIH0KICBdCn0K"},"policy":{"vehicle":"cGFja2FnZSB2ZWhpY2xlCgppbXBvcnQgIHJlZ28udjEKCmRlZmF1bHQgYWxsb3cgOj0gZmFsc2UKCmFsbG93IGlmIHsKICAgIHVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzCiAgICBhY3Rpb25faXNfZ3JhbnRlZAp9CgphY3Rpb25faXNfZ3JhbnRlZCBpZiB7CiAgICAidXNlIiBpbiBpbnB1dC5hY3Rpb25zCn0KCnVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzIGNvbnRhaW5zIHZlaGljbGVfZGF0YSBpZiB7CiAgICBzb21lIHZlaGljbGUgaW4gZGF0YS5ub2RlLnZlaGljbGUudmVoaWNsZXMKICAgIHZlaGljbGUudmVoaWNsZV9pZCA9PSBpbnB1dC52ZWhpY2xlX2lkCiAgICB2ZWhpY2xlLm93bmVyID09IGlucHV0LnVzZXIKICAgIHZlaGljbGVfZGF0YSA6PSB7aW5mbzogdmVoaWNsZVtpbmZvXSB8IGluZm8gaW4gaW5wdXQuYXR0cmlidXRlc30KfQo="}},"name":"vehicle","version":"1.0.6","metadata":{"policy-id":"vehicle","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"f035de1b-a3a2-4fb7-882e-721e5e1c050d","timestampMs":1750325126920,"name":"opa-6446e8da-32c3-48c2-9df7-d65664d9050e","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 09:27:21 policy-pap | [2025-06-19T09:25:26.946+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 09:27:21 policy-pap | [2025-06-19T09:25:26.946+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 09:27:21 policy-pap | {"source":"pap-6f1d1af8-1c2a-4d8a-8de4-bf40559a9b3f","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.vehicle":"ewogICJ2ZWhpY2xlcyI6IFsKICAgIHsgInZlaGljbGVfaWQiOiAidjEiLCAib3duZXIiOiAidXNlcjEiLCAidHlwZSI6ICJjYXIiLCAic3RhdHVzIjogImF2YWlsYWJsZSIgfSwKICAgIHsgInZlaGljbGVfaWQiOiAidjIiLCAib3duZXIiOiAidXNlcjIiLCAidHlwZSI6ICJiaWtlIiwgInN0YXR1cyI6ICJpbiB1c2UiIH0KICBdCn0K"},"policy":{"vehicle":"cGFja2FnZSB2ZWhpY2xlCgppbXBvcnQgIHJlZ28udjEKCmRlZmF1bHQgYWxsb3cgOj0gZmFsc2UKCmFsbG93IGlmIHsKICAgIHVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzCiAgICBhY3Rpb25faXNfZ3JhbnRlZAp9CgphY3Rpb25faXNfZ3JhbnRlZCBpZiB7CiAgICAidXNlIiBpbiBpbnB1dC5hY3Rpb25zCn0KCnVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzIGNvbnRhaW5zIHZlaGljbGVfZGF0YSBpZiB7CiAgICBzb21lIHZlaGljbGUgaW4gZGF0YS5ub2RlLnZlaGljbGUudmVoaWNsZXMKICAgIHZlaGljbGUudmVoaWNsZV9pZCA9PSBpbnB1dC52ZWhpY2xlX2lkCiAgICB2ZWhpY2xlLm93bmVyID09IGlucHV0LnVzZXIKICAgIHZlaGljbGVfZGF0YSA6PSB7aW5mbzogdmVoaWNsZVtpbmZvXSB8IGluZm8gaW4gaW5wdXQuYXR0cmlidXRlc30KfQo="}},"name":"vehicle","version":"1.0.6","metadata":{"policy-id":"vehicle","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"f035de1b-a3a2-4fb7-882e-721e5e1c050d","timestampMs":1750325126920,"name":"opa-6446e8da-32c3-48c2-9df7-d65664d9050e","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 09:27:21 policy-pap | [2025-06-19T09:25:26.947+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 09:27:21 policy-pap | [2025-06-19T09:25:26.976+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 09:27:21 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"f035de1b-a3a2-4fb7-882e-721e5e1c050d","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n 
\"vehicle\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"vehicle","version":"1.0.6"}],"name":"opa-6446e8da-32c3-48c2-9df7-d65664d9050e","requestId":"096b1807-1a23-4459-87bf-e6cffafee28e","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750325126965","deploymentInstanceInfo":""} 09:27:21 policy-pap | [2025-06-19T09:25:26.977+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id f035de1b-a3a2-4fb7-882e-721e5e1c050d 09:27:21 policy-pap | [2025-06-19T09:25:26.977+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 09:27:21 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"f035de1b-a3a2-4fb7-882e-721e5e1c050d","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"vehicle\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"vehicle","version":"1.0.6"}],"name":"opa-6446e8da-32c3-48c2-9df7-d65664d9050e","requestId":"096b1807-1a23-4459-87bf-e6cffafee28e","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750325126965","deploymentInstanceInfo":""} 09:27:21 policy-pap | [2025-06-19T09:25:26.977+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpUpdate stopping 09:27:21 policy-pap | [2025-06-19T09:25:26.977+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpUpdate stopping enqueue 09:27:21 policy-pap | [2025-06-19T09:25:26.977+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpUpdate stopping timer 09:27:21 policy-pap | [2025-06-19T09:25:26.978+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=f035de1b-a3a2-4fb7-882e-721e5e1c050d, expireMs=1750325156937] 09:27:21 policy-pap | [2025-06-19T09:25:26.978+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpUpdate stopping listener 09:27:21 policy-pap | [2025-06-19T09:25:26.978+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpUpdate stopped 09:27:21 policy-pap | [2025-06-19T09:25:26.991+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpUpdate successful 09:27:21 policy-pap | [2025-06-19T09:25:26.991+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-6446e8da-32c3-48c2-9df7-d65664d9050e has no more requests 09:27:21 policy-pap | [2025-06-19T09:25:26.991+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification] 09:27:21 policy-pap | {"deployed-policies":[{"policy-type":"onap.policies.native.opa","policy-type-version":"1.0.0","policy-id":"vehicle","policy-version":"1.0.6","success-count":1,"failure-count":0,"incomplete-count":0}],"undeployed-policies":[]} 09:27:21 policy-pap | [2025-06-19T09:25:31.120+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=c212e2a9-33e3-4aa7-bd48-d52ca83f261d, expireMs=1750325131120] 09:27:21 policy-pap | [2025-06-19T09:25:49.789+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 09:27:21 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp 
heartbeat","response":null,"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"vehicle","version":"1.0.6"}],"name":"opa-6446e8da-32c3-48c2-9df7-d65664d9050e","requestId":"cd490f88-9ebe-44e4-8a7d-e586e40c3d22","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750325149778","deploymentInstanceInfo":""} 09:27:21 policy-pap | [2025-06-19T09:25:49.789+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 09:27:21 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp heartbeat","response":null,"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"vehicle","version":"1.0.6"}],"name":"opa-6446e8da-32c3-48c2-9df7-d65664d9050e","requestId":"cd490f88-9ebe-44e4-8a7d-e586e40c3d22","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750325149778","deploymentInstanceInfo":""} 09:27:21 policy-pap | [2025-06-19T09:25:49.790+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus 09:27:21 policy-pap | [2025-06-19T09:25:51.379+00:00|INFO|SessionData|http-nio-6969-exec-2] cache group opaGroup 09:27:21 policy-pap | [2025-06-19T09:25:51.379+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-2] remove policy vehicle 1.0.6 from subgroup opaGroup opa count=1 09:27:21 policy-pap | [2025-06-19T09:25:51.379+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-2] Registering an undeploy for policy vehicle 1.0.6 09:27:21 policy-pap | [2025-06-19T09:25:51.380+00:00|INFO|SessionData|http-nio-6969-exec-2] add update opa-6446e8da-32c3-48c2-9df7-d65664d9050e opaGroup opa policies=0 09:27:21 policy-pap | [2025-06-19T09:25:51.380+00:00|INFO|SessionData|http-nio-6969-exec-2] update cached group opaGroup 09:27:21 policy-pap | [2025-06-19T09:25:51.380+00:00|INFO|SessionData|http-nio-6969-exec-2] updating DB group opaGroup 09:27:21 policy-pap | [2025-06-19T09:25:51.399+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-2] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=opaGroup, pdpType=opa, policy=vehicle 1.0.6, action=UNDEPLOYMENT, timestamp=2025-06-19T09:25:51Z, user=policyadmin)] 09:27:21 policy-pap | [2025-06-19T09:25:51.414+00:00|INFO|ServiceManager|http-nio-6969-exec-2] opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpUpdate starting 09:27:21 policy-pap | [2025-06-19T09:25:51.414+00:00|INFO|ServiceManager|http-nio-6969-exec-2] opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpUpdate starting listener 09:27:21 policy-pap | [2025-06-19T09:25:51.415+00:00|INFO|ServiceManager|http-nio-6969-exec-2] opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpUpdate starting timer 09:27:21 policy-pap | [2025-06-19T09:25:51.415+00:00|INFO|TimerManager|http-nio-6969-exec-2] update timer registered Timer [name=9c9107b1-c799-4c7b-9973-58924a565674, expireMs=1750325181415] 09:27:21 policy-pap | [2025-06-19T09:25:51.415+00:00|INFO|TimerManager|Thread-9] update timer waiting 30000ms Timer [name=9c9107b1-c799-4c7b-9973-58924a565674, expireMs=1750325181415] 09:27:21 policy-pap | [2025-06-19T09:25:51.415+00:00|INFO|ServiceManager|http-nio-6969-exec-2] opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpUpdate starting enqueue 09:27:21 policy-pap | [2025-06-19T09:25:51.415+00:00|INFO|ServiceManager|http-nio-6969-exec-2] opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpUpdate started 09:27:21 policy-pap | [2025-06-19T09:25:51.416+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 09:27:21 policy-pap | 
{"source":"pap-6f1d1af8-1c2a-4d8a-8de4-bf40559a9b3f","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"vehicle","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"9c9107b1-c799-4c7b-9973-58924a565674","timestampMs":1750325151380,"name":"opa-6446e8da-32c3-48c2-9df7-d65664d9050e","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 09:27:21 policy-pap | [2025-06-19T09:25:51.424+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 09:27:21 policy-pap | {"source":"pap-6f1d1af8-1c2a-4d8a-8de4-bf40559a9b3f","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"vehicle","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"9c9107b1-c799-4c7b-9973-58924a565674","timestampMs":1750325151380,"name":"opa-6446e8da-32c3-48c2-9df7-d65664d9050e","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 09:27:21 policy-pap | [2025-06-19T09:25:51.425+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 09:27:21 policy-pap | [2025-06-19T09:25:51.431+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 09:27:21 policy-pap | {"source":"pap-6f1d1af8-1c2a-4d8a-8de4-bf40559a9b3f","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"vehicle","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"9c9107b1-c799-4c7b-9973-58924a565674","timestampMs":1750325151380,"name":"opa-6446e8da-32c3-48c2-9df7-d65664d9050e","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 09:27:21 policy-pap | [2025-06-19T09:25:51.432+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 09:27:21 policy-pap | [2025-06-19T09:25:51.436+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 09:27:21 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"9c9107b1-c799-4c7b-9973-58924a565674","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"vehicle\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-6446e8da-32c3-48c2-9df7-d65664d9050e","requestId":"4f96631e-168c-42a3-be34-01f25a59055b","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750325151425","deploymentInstanceInfo":""} 09:27:21 policy-pap | [2025-06-19T09:25:51.437+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 9c9107b1-c799-4c7b-9973-58924a565674 09:27:21 policy-pap | [2025-06-19T09:25:51.438+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 09:27:21 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"9c9107b1-c799-4c7b-9973-58924a565674","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"vehicle\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-6446e8da-32c3-48c2-9df7-d65664d9050e","requestId":"4f96631e-168c-42a3-be34-01f25a59055b","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750325151425","deploymentInstanceInfo":""} 09:27:21 policy-pap | [2025-06-19T09:25:51.438+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpUpdate stopping 09:27:21 policy-pap | [2025-06-19T09:25:51.438+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpUpdate stopping 
enqueue 09:27:21 policy-pap | [2025-06-19T09:25:51.438+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpUpdate stopping timer 09:27:21 policy-pap | [2025-06-19T09:25:51.438+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=9c9107b1-c799-4c7b-9973-58924a565674, expireMs=1750325181415] 09:27:21 policy-pap | [2025-06-19T09:25:51.438+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpUpdate stopping listener 09:27:21 policy-pap | [2025-06-19T09:25:51.438+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpUpdate stopped 09:27:21 policy-pap | [2025-06-19T09:25:51.446+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpUpdate successful 09:27:21 policy-pap | [2025-06-19T09:25:51.447+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-6446e8da-32c3-48c2-9df7-d65664d9050e has no more requests 09:27:21 policy-pap | [2025-06-19T09:25:51.447+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification] 09:27:21 policy-pap | {"deployed-policies":[],"undeployed-policies":[{"policy-type":"onap.policies.native.opa","policy-type-version":"1.0.0","policy-id":"vehicle","policy-version":"1.0.6","success-count":1,"failure-count":0,"incomplete-count":0}]} 09:27:21 policy-pap | [2025-06-19T09:25:51.792+00:00|INFO|SessionData|http-nio-6969-exec-3] cache group opaGroup 09:27:21 policy-pap | [2025-06-19T09:25:51.792+00:00|WARN|PdpGroupDeleteProvider|http-nio-6969-exec-3] failed to undeploy policy: vehicle null 09:27:21 policy-pap | [2025-06-19T09:25:51.792+00:00|WARN|PdpGroupDeleteControllerV1|http-nio-6969-exec-3] undeploy policy failed 09:27:21 policy-pap | org.onap.policy.models.base.PfModelException: policy does not appear in any PDP group: vehicle null 09:27:21 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteProvider.undeployPolicy(PdpGroupDeleteProvider.java:108) 09:27:21 policy-pap | at org.onap.policy.pap.main.rest.ProviderBase.process(ProviderBase.java:161) 09:27:21 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteProvider.undeploy(PdpGroupDeleteProvider.java:92) 09:27:21 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 09:27:21 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) 09:27:21 policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 09:27:21 policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569) 09:27:21 policy-pap | at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:359) 09:27:21 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:196) 09:27:21 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163) 09:27:21 policy-pap | at org.springframework.aop.aspectj.AspectJAfterThrowingAdvice.invoke(AspectJAfterThrowingAdvice.java:64) 09:27:21 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) 09:27:21 policy-pap | at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:97) 09:27:21 policy-pap | at 
org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) 09:27:21 policy-pap | at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:728) 09:27:21 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteProvider$$SpringCGLIB$$0.undeploy() 09:27:21 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.lambda$deletePolicy$1(PdpGroupDeleteControllerV1.java:107) 09:27:21 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.doUndeployOperation(PdpGroupDeleteControllerV1.java:160) 09:27:21 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.deletePolicy(PdpGroupDeleteControllerV1.java:106) 09:27:21 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 09:27:21 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) 09:27:21 policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 09:27:21 policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569) 09:27:21 policy-pap | at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:359) 09:27:21 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:196) 09:27:21 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163) 09:27:21 policy-pap | at org.springframework.aop.aspectj.AspectJAfterThrowingAdvice.invoke(AspectJAfterThrowingAdvice.java:64) 09:27:21 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) 09:27:21 policy-pap | at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:97) 09:27:21 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) 09:27:21 policy-pap | at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:728) 09:27:21 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1$$SpringCGLIB$$0.deletePolicy() 09:27:21 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 09:27:21 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) 09:27:21 policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 09:27:21 policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569) 09:27:21 policy-pap | at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:258) 09:27:21 policy-pap | at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:191) 09:27:21 policy-pap | at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:118) 09:27:21 policy-pap | at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:986) 09:27:21 policy-pap | at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:891) 
09:27:21 policy-pap | at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87) 09:27:21 policy-pap | at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1089) 09:27:21 policy-pap | at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:979) 09:27:21 policy-pap | at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1014) 09:27:21 policy-pap | at org.springframework.web.servlet.FrameworkServlet.doDelete(FrameworkServlet.java:936) 09:27:21 policy-pap | at jakarta.servlet.http.HttpServlet.service(HttpServlet.java:659) 09:27:21 policy-pap | at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:885) 09:27:21 policy-pap | at jakarta.servlet.http.HttpServlet.service(HttpServlet.java:723) 09:27:21 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:195) 09:27:21 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 09:27:21 policy-pap | at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:51) 09:27:21 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) 09:27:21 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 09:27:21 policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:108) 09:27:21 policy-pap | at org.springframework.security.web.FilterChainProxy.lambda$doFilterInternal$3(FilterChainProxy.java:231) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$FilterObservation$SimpleFilterObservation.lambda$wrap$1(ObservationFilterChainDecorator.java:479) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$AroundFilterObservation$SimpleAroundFilterObservation.lambda$wrap$1(ObservationFilterChainDecorator.java:340) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator.lambda$wrapSecured$0(ObservationFilterChainDecorator.java:82) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:128) 09:27:21 policy-pap | at org.springframework.security.web.access.intercept.AuthorizationFilter.doFilter(AuthorizationFilter.java:101) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 09:27:21 policy-pap | at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:126) 09:27:21 policy-pap | at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:120) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 09:27:21 policy-pap | at 
org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 09:27:21 policy-pap | at org.springframework.security.web.authentication.AnonymousAuthenticationFilter.doFilter(AnonymousAuthenticationFilter.java:100) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 09:27:21 policy-pap | at org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter.doFilter(SecurityContextHolderAwareRequestFilter.java:179) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 09:27:21 policy-pap | at org.springframework.security.web.savedrequest.RequestCacheAwareFilter.doFilter(RequestCacheAwareFilter.java:63) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 09:27:21 policy-pap | at org.springframework.security.web.authentication.www.BasicAuthenticationFilter.doFilterInternal(BasicAuthenticationFilter.java:213) 09:27:21 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 09:27:21 policy-pap | at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:107) 09:27:21 policy-pap | at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:93) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 09:27:21 policy-pap | at 
org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 09:27:21 policy-pap | at org.springframework.security.web.header.HeaderWriterFilter.doHeadersAfter(HeaderWriterFilter.java:90) 09:27:21 policy-pap | at org.springframework.security.web.header.HeaderWriterFilter.doFilterInternal(HeaderWriterFilter.java:75) 09:27:21 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 09:27:21 policy-pap | at org.springframework.security.web.context.SecurityContextHolderFilter.doFilter(SecurityContextHolderFilter.java:82) 09:27:21 policy-pap | at org.springframework.security.web.context.SecurityContextHolderFilter.doFilter(SecurityContextHolderFilter.java:69) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 09:27:21 policy-pap | at org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter.doFilterInternal(WebAsyncManagerIntegrationFilter.java:62) 09:27:21 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 09:27:21 policy-pap | at org.springframework.security.web.session.DisableEncodeUrlFilter.doFilterInternal(DisableEncodeUrlFilter.java:42) 09:27:21 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$AroundFilterObservation$SimpleAroundFilterObservation.lambda$wrap$0(ObservationFilterChainDecorator.java:323) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:224) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 09:27:21 policy-pap | at 
org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:233) 09:27:21 policy-pap | at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:191) 09:27:21 policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:113) 09:27:21 policy-pap | at org.springframework.web.servlet.handler.HandlerMappingIntrospector.lambda$createCacheFilter$3(HandlerMappingIntrospector.java:243) 09:27:21 policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:113) 09:27:21 policy-pap | at org.springframework.web.filter.CompositeFilter.doFilter(CompositeFilter.java:74) 09:27:21 policy-pap | at org.springframework.security.config.annotation.web.configuration.WebMvcSecurityConfiguration$CompositeFilterChainProxy.doFilter(WebMvcSecurityConfiguration.java:238) 09:27:21 policy-pap | at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:362) 09:27:21 policy-pap | at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:278) 09:27:21 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) 09:27:21 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 09:27:21 policy-pap | at org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:100) 09:27:21 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 09:27:21 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) 09:27:21 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 09:27:21 policy-pap | at org.springframework.web.filter.FormContentFilter.doFilterInternal(FormContentFilter.java:93) 09:27:21 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 09:27:21 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) 09:27:21 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 09:27:21 policy-pap | at org.springframework.web.filter.ServerHttpObservationFilter.doFilterInternal(ServerHttpObservationFilter.java:114) 09:27:21 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 09:27:21 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) 09:27:21 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 09:27:21 policy-pap | at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:201) 09:27:21 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 09:27:21 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) 09:27:21 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 09:27:21 policy-pap | at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:167) 09:27:21 policy-pap | at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:90) 09:27:21 
policy-pap | at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:483) 09:27:21 policy-pap | at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:116) 09:27:21 policy-pap | at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:93) 09:27:21 policy-pap | at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:74) 09:27:21 policy-pap | at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:344) 09:27:21 policy-pap | at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:398) 09:27:21 policy-pap | at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:63) 09:27:21 policy-pap | at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:903) 09:27:21 policy-pap | at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1740) 09:27:21 policy-pap | at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:52) 09:27:21 policy-pap | at org.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1189) 09:27:21 policy-pap | at org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:658) 09:27:21 policy-pap | at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:63) 09:27:21 policy-pap | at java.base/java.lang.Thread.run(Thread.java:840) 09:27:21 policy-pap | [2025-06-19T09:25:52.522+00:00|INFO|SessionData|http-nio-6969-exec-4] cache group opaGroup 09:27:21 policy-pap | [2025-06-19T09:25:52.522+00:00|INFO|PdpGroupDeployProvider|http-nio-6969-exec-4] add policy abac 1.0.7 to subgroup opaGroup opa count=2 09:27:21 policy-pap | [2025-06-19T09:25:52.522+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-4] Registering a deploy for policy abac 1.0.7 09:27:21 policy-pap | [2025-06-19T09:25:52.522+00:00|INFO|SessionData|http-nio-6969-exec-4] add update opa-6446e8da-32c3-48c2-9df7-d65664d9050e opaGroup opa policies=1 09:27:21 policy-pap | [2025-06-19T09:25:52.522+00:00|INFO|SessionData|http-nio-6969-exec-4] update cached group opaGroup 09:27:21 policy-pap | [2025-06-19T09:25:52.522+00:00|INFO|SessionData|http-nio-6969-exec-4] updating DB group opaGroup 09:27:21 policy-pap | [2025-06-19T09:25:52.529+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-4] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=opaGroup, pdpType=opa, policy=abac 1.0.7, action=DEPLOYMENT, timestamp=2025-06-19T09:25:52Z, user=policyadmin)] 09:27:21 policy-pap | [2025-06-19T09:25:52.538+00:00|INFO|ServiceManager|http-nio-6969-exec-4] opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpUpdate starting 09:27:21 policy-pap | [2025-06-19T09:25:52.538+00:00|INFO|ServiceManager|http-nio-6969-exec-4] opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpUpdate starting listener 09:27:21 policy-pap | [2025-06-19T09:25:52.538+00:00|INFO|ServiceManager|http-nio-6969-exec-4] opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpUpdate starting timer 09:27:21 policy-pap | [2025-06-19T09:25:52.538+00:00|INFO|TimerManager|http-nio-6969-exec-4] update timer registered Timer [name=db8a1006-0422-4cb8-9506-bb64c1a65fa0, expireMs=1750325182538] 09:27:21 policy-pap | [2025-06-19T09:25:52.538+00:00|INFO|ServiceManager|http-nio-6969-exec-4] opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpUpdate starting enqueue 09:27:21 policy-pap | [2025-06-19T09:25:52.538+00:00|INFO|ServiceManager|http-nio-6969-exec-4] opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpUpdate started 
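(Decoded payloads, for readability.) The "policy" and "data" fields inside these PDP_UPDATE messages are plain base64 (e.g. base64.b64decode(s).decode() in Python recovers them). Below is a sketch of what they contain: the vehicle Rego policy and its node.vehicle data from the deployment at 09:25:26 above, and the abac Rego policy carried in the PDP_UPDATE that follows; whitespace is reproduced as found in the payloads, and the longer node.abac sensor_data blob is not reproduced here.

    # decoded from properties.policy.vehicle (PDP_UPDATE at 09:25:26)
    package vehicle

    import  rego.v1

    default allow := false

    allow if {
        user_has_vehicle_access
        action_is_granted
    }

    action_is_granted if {
        "use" in input.actions
    }

    user_has_vehicle_access contains vehicle_data if {
        some vehicle in data.node.vehicle.vehicles
        vehicle.vehicle_id == input.vehicle_id
        vehicle.owner == input.user
        vehicle_data := {info: vehicle[info] | info in input.attributes}
    }

Decoded from properties.data["node.vehicle"]:

    {
      "vehicles": [
        { "vehicle_id": "v1", "owner": "user1", "type": "car", "status": "available" },
        { "vehicle_id": "v2", "owner": "user2", "type": "bike", "status": "in use" }
      ]
    }

Decoded from properties.policy.abac (PDP_UPDATE at 09:25:52, below):

    package abac

    import rego.v1

    default allow := false

    allow if {
     viewable_sensor_data
     action_is_read
    }

    action_is_read if "read" in input.actions

    viewable_sensor_data contains view_data if {
     some sensor_data in data.node.abac.sensor_data
     sensor_data.timestamp >= input.time_period.from
     sensor_data.timestamp < input.time_period.to

     view_data := {datatype: sensor_data[datatype] | datatype in input.datatypes}
    }
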
09:27:21 policy-pap | [2025-06-19T09:25:52.538+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 09:27:21 policy-pap | {"source":"pap-6f1d1af8-1c2a-4d8a-8de4-bf40559a9b3f","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.abac":"ewogICAgInNlbnNvcl9kYXRhIjogWwogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDEiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiU3JpIExhbmthIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjI4IEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjYiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDAyIiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkNvbG9tYm8iLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMzAgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjEyMDAgbW0iLAogICAgICAgICAgICAid2luZHNwZWVkIjogIjYuMCBtL3MiLAogICAgICAgICAgICAiaHVtaWRpdHkiOiAiNDUlIiwKICAgICAgICAgICAgInBhcnRpY2xlX2RlbnNpdHkiOiAiMS41IGcvbCIsCiAgICAgICAgICAgICJ0aW1lc3RhbXAiOiAiMjAyNC0wMi0yNiIKICAgICAgICB9LAogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDMiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiS2FuZHkiLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMjUgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjgwMCBtbSIsCiAgICAgICAgICAgICJ3aW5kc3BlZWQiOiAiNC41IG0vcyIsCiAgICAgICAgICAgICJodW1pZGl0eSI6ICI2MCUiLAogICAgICAgICAgICAicGFydGljbGVfZGVuc2l0eSI6ICIxLjEgZy9sIiwKICAgICAgICAgICAgInRpbWVzdGFtcCI6ICIyMDI0LTAyLTI2IgogICAgICAgIH0sCiAgICAgICAgewogICAgICAgICAgICAiaWQiOiAiMDAwNCIsCiAgICAgICAgICAgICJsb2NhdGlvbiI6ICJHYWxsZSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI3LjIgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjMwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuOCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA1IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkphZmZuYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICItNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiMzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICIzLjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjIwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjAuOSBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA2IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIlRyaW5jb21hbGVlIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjIwIEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjU1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA3IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk51d2FyYSBFbGl5YSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNjAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI0LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjUwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA4IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkFudXJhZGhhcHVyYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyOCBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjog
IjIwMjQtMDItMjkiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA5IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk1hdGFyYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzMiBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiOTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI2LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjY1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfQogICAgXQp9"},"policy":{"abac":"cGFja2FnZSBhYmFjCgppbXBvcnQgcmVnby52MQoKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQoKYWxsb3cgaWYgewogdmlld2FibGVfc2Vuc29yX2RhdGEKIGFjdGlvbl9pc19yZWFkCn0KCmFjdGlvbl9pc19yZWFkIGlmICJyZWFkIiBpbiBpbnB1dC5hY3Rpb25zCgp2aWV3YWJsZV9zZW5zb3JfZGF0YSBjb250YWlucyB2aWV3X2RhdGEgaWYgewogc29tZSBzZW5zb3JfZGF0YSBpbiBkYXRhLm5vZGUuYWJhYy5zZW5zb3JfZGF0YQogc2Vuc29yX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KIHNlbnNvcl9kYXRhLnRpbWVzdGFtcCA8IGlucHV0LnRpbWVfcGVyaW9kLnRvCgogdmlld19kYXRhIDo9IHtkYXRhdHlwZTogc2Vuc29yX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"abac","version":"1.0.7","metadata":{"policy-id":"abac","policy-version":"1.0.7"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"db8a1006-0422-4cb8-9506-bb64c1a65fa0","timestampMs":1750325152522,"name":"opa-6446e8da-32c3-48c2-9df7-d65664d9050e","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 09:27:21 policy-pap | [2025-06-19T09:25:52.546+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 09:27:21 policy-pap | {"source":"pap-6f1d1af8-1c2a-4d8a-8de4-bf40559a9b3f","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.abac":"ewogICAgInNlbnNvcl9kYXRhIjogWwogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDEiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiU3JpIExhbmthIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjI4IEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjYiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDAyIiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkNvbG9tYm8iLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMzAgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjEyMDAgbW0iLAogICAgICAgICAgICAid2luZHNwZWVkIjogIjYuMCBtL3MiLAogICAgICAgICAgICAiaHVtaWRpdHkiOiAiNDUlIiwKICAgICAgICAgICAgInBhcnRpY2xlX2RlbnNpdHkiOiAiMS41IGcvbCIsCiAgICAgICAgICAgICJ0aW1lc3RhbXAiOiAiMjAyNC0wMi0yNiIKICAgICAgICB9LAogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDMiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiS2FuZHkiLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMjUgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjgwMCBtbSIsCiAgICAgICAgICAgICJ3aW5kc3BlZWQiOiAiNC41IG0vcyIsCiAgICAgICAgICAgICJodW1pZGl0eSI6ICI2MCUiLAogICAgICAgICAgICAicGFydGljbGVfZGVuc2l0eSI6ICIxLjEgZy9sIiwKICAgICAgICAgICAgInRpbWVzdGFtcCI6ICIyMDI0LTAyLTI2IgogICAgICAgIH0sCiAgICAgICAgewogICAgICAgICAgICAiaWQiOiAiMDAwNCIsCiAgICAgICAgICAgICJsb2NhdGlvbiI6ICJHYWxsZSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI3LjIgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjMwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuOCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA1IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkphZmZuYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICItNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiMzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICIzLjggbS9zIiwKICA
gICAgICAgICAgImh1bWlkaXR5IjogIjIwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjAuOSBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA2IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIlRyaW5jb21hbGVlIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjIwIEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjU1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA3IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk51d2FyYSBFbGl5YSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNjAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI0LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjUwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA4IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkFudXJhZGhhcHVyYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyOCBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA5IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk1hdGFyYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzMiBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiOTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI2LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjY1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfQogICAgXQp9"},"policy":{"abac":"cGFja2FnZSBhYmFjCgppbXBvcnQgcmVnby52MQoKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQoKYWxsb3cgaWYgewogdmlld2FibGVfc2Vuc29yX2RhdGEKIGFjdGlvbl9pc19yZWFkCn0KCmFjdGlvbl9pc19yZWFkIGlmICJyZWFkIiBpbiBpbnB1dC5hY3Rpb25zCgp2aWV3YWJsZV9zZW5zb3JfZGF0YSBjb250YWlucyB2aWV3X2RhdGEgaWYgewogc29tZSBzZW5zb3JfZGF0YSBpbiBkYXRhLm5vZGUuYWJhYy5zZW5zb3JfZGF0YQogc2Vuc29yX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KIHNlbnNvcl9kYXRhLnRpbWVzdGFtcCA8IGlucHV0LnRpbWVfcGVyaW9kLnRvCgogdmlld19kYXRhIDo9IHtkYXRhdHlwZTogc2Vuc29yX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"abac","version":"1.0.7","metadata":{"policy-id":"abac","policy-version":"1.0.7"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"db8a1006-0422-4cb8-9506-bb64c1a65fa0","timestampMs":1750325152522,"name":"opa-6446e8da-32c3-48c2-9df7-d65664d9050e","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 09:27:21 policy-pap | [2025-06-19T09:25:52.546+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 09:27:21 policy-pap | [2025-06-19T09:25:52.548+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 09:27:21 policy-pap | 
{"source":"pap-6f1d1af8-1c2a-4d8a-8de4-bf40559a9b3f","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.abac":"ewogICAgInNlbnNvcl9kYXRhIjogWwogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDEiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiU3JpIExhbmthIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjI4IEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjYiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDAyIiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkNvbG9tYm8iLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMzAgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjEyMDAgbW0iLAogICAgICAgICAgICAid2luZHNwZWVkIjogIjYuMCBtL3MiLAogICAgICAgICAgICAiaHVtaWRpdHkiOiAiNDUlIiwKICAgICAgICAgICAgInBhcnRpY2xlX2RlbnNpdHkiOiAiMS41IGcvbCIsCiAgICAgICAgICAgICJ0aW1lc3RhbXAiOiAiMjAyNC0wMi0yNiIKICAgICAgICB9LAogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDMiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiS2FuZHkiLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMjUgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjgwMCBtbSIsCiAgICAgICAgICAgICJ3aW5kc3BlZWQiOiAiNC41IG0vcyIsCiAgICAgICAgICAgICJodW1pZGl0eSI6ICI2MCUiLAogICAgICAgICAgICAicGFydGljbGVfZGVuc2l0eSI6ICIxLjEgZy9sIiwKICAgICAgICAgICAgInRpbWVzdGFtcCI6ICIyMDI0LTAyLTI2IgogICAgICAgIH0sCiAgICAgICAgewogICAgICAgICAgICAiaWQiOiAiMDAwNCIsCiAgICAgICAgICAgICJsb2NhdGlvbiI6ICJHYWxsZSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI3LjIgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjMwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuOCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA1IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkphZmZuYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICItNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiMzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICIzLjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjIwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjAuOSBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA2IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIlRyaW5jb21hbGVlIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjIwIEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjU1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA3IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk51d2FyYSBFbGl5YSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNjAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI0LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjUwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA4IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkFudXJhZGhhcHVyYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyOCBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA5IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk1hdGFyYSIsC
iAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzMiBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiOTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI2LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjY1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfQogICAgXQp9"},"policy":{"abac":"cGFja2FnZSBhYmFjCgppbXBvcnQgcmVnby52MQoKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQoKYWxsb3cgaWYgewogdmlld2FibGVfc2Vuc29yX2RhdGEKIGFjdGlvbl9pc19yZWFkCn0KCmFjdGlvbl9pc19yZWFkIGlmICJyZWFkIiBpbiBpbnB1dC5hY3Rpb25zCgp2aWV3YWJsZV9zZW5zb3JfZGF0YSBjb250YWlucyB2aWV3X2RhdGEgaWYgewogc29tZSBzZW5zb3JfZGF0YSBpbiBkYXRhLm5vZGUuYWJhYy5zZW5zb3JfZGF0YQogc2Vuc29yX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KIHNlbnNvcl9kYXRhLnRpbWVzdGFtcCA8IGlucHV0LnRpbWVfcGVyaW9kLnRvCgogdmlld19kYXRhIDo9IHtkYXRhdHlwZTogc2Vuc29yX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"abac","version":"1.0.7","metadata":{"policy-id":"abac","policy-version":"1.0.7"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"db8a1006-0422-4cb8-9506-bb64c1a65fa0","timestampMs":1750325152522,"name":"opa-6446e8da-32c3-48c2-9df7-d65664d9050e","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 09:27:21 policy-pap | [2025-06-19T09:25:52.548+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 09:27:21 policy-pap | [2025-06-19T09:25:52.586+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 09:27:21 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"db8a1006-0422-4cb8-9506-bb64c1a65fa0","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"abac\": \"1.0.7\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"abac","version":"1.0.7"}],"name":"opa-6446e8da-32c3-48c2-9df7-d65664d9050e","requestId":"354bbffa-4b61-43cc-ab43-09996db2b0f3","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750325152573","deploymentInstanceInfo":""} 09:27:21 policy-pap | [2025-06-19T09:25:52.586+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id db8a1006-0422-4cb8-9506-bb64c1a65fa0 09:27:21 policy-pap | [2025-06-19T09:25:52.587+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 09:27:21 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"db8a1006-0422-4cb8-9506-bb64c1a65fa0","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"abac\": \"1.0.7\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"abac","version":"1.0.7"}],"name":"opa-6446e8da-32c3-48c2-9df7-d65664d9050e","requestId":"354bbffa-4b61-43cc-ab43-09996db2b0f3","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750325152573","deploymentInstanceInfo":""} 09:27:21 policy-pap | [2025-06-19T09:25:52.588+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpUpdate stopping 09:27:21 policy-pap | [2025-06-19T09:25:52.588+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpUpdate stopping enqueue 09:27:21 policy-pap | [2025-06-19T09:25:52.588+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpUpdate 
stopping timer 09:27:21 policy-pap | [2025-06-19T09:25:52.588+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=db8a1006-0422-4cb8-9506-bb64c1a65fa0, expireMs=1750325182538] 09:27:21 policy-pap | [2025-06-19T09:25:52.588+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpUpdate stopping listener 09:27:21 policy-pap | [2025-06-19T09:25:52.588+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpUpdate stopped 09:27:21 policy-pap | [2025-06-19T09:25:52.602+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpUpdate successful 09:27:21 policy-pap | [2025-06-19T09:25:52.602+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-6446e8da-32c3-48c2-9df7-d65664d9050e has no more requests 09:27:21 policy-pap | [2025-06-19T09:25:52.602+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification] 09:27:21 policy-pap | {"deployed-policies":[{"policy-type":"onap.policies.native.opa","policy-type-version":"1.0.0","policy-id":"abac","policy-version":"1.0.7","success-count":1,"failure-count":0,"incomplete-count":0}],"undeployed-policies":[]} 09:27:21 policy-pap | [2025-06-19T09:25:53.547+00:00|INFO|PdpModifyRequestMap|pool-3-thread-1] check for PDP records older than 360000ms 09:27:21 policy-pap | [2025-06-19T09:26:17.236+00:00|INFO|SessionData|http-nio-6969-exec-6] cache group opaGroup 09:27:21 policy-pap | [2025-06-19T09:26:17.236+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-6] remove policy abac 1.0.7 from subgroup opaGroup opa count=1 09:27:21 policy-pap | [2025-06-19T09:26:17.236+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-6] Registering an undeploy for policy abac 1.0.7 09:27:21 policy-pap | [2025-06-19T09:26:17.236+00:00|INFO|SessionData|http-nio-6969-exec-6] add update opa-6446e8da-32c3-48c2-9df7-d65664d9050e opaGroup opa policies=0 09:27:21 policy-pap | [2025-06-19T09:26:17.236+00:00|INFO|SessionData|http-nio-6969-exec-6] update cached group opaGroup 09:27:21 policy-pap | [2025-06-19T09:26:17.236+00:00|INFO|SessionData|http-nio-6969-exec-6] updating DB group opaGroup 09:27:21 policy-pap | [2025-06-19T09:26:17.254+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-6] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=opaGroup, pdpType=opa, policy=abac 1.0.7, action=UNDEPLOYMENT, timestamp=2025-06-19T09:26:17Z, user=policyadmin)] 09:27:21 policy-pap | [2025-06-19T09:26:17.268+00:00|INFO|ServiceManager|http-nio-6969-exec-6] opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpUpdate starting 09:27:21 policy-pap | [2025-06-19T09:26:17.268+00:00|INFO|ServiceManager|http-nio-6969-exec-6] opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpUpdate starting listener 09:27:21 policy-pap | [2025-06-19T09:26:17.268+00:00|INFO|ServiceManager|http-nio-6969-exec-6] opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpUpdate starting timer 09:27:21 policy-pap | [2025-06-19T09:26:17.268+00:00|INFO|TimerManager|http-nio-6969-exec-6] update timer registered Timer [name=8652fa24-c749-4491-9f90-13545ab0a216, expireMs=1750325207268] 09:27:21 policy-pap | [2025-06-19T09:26:17.268+00:00|INFO|ServiceManager|http-nio-6969-exec-6] opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpUpdate starting enqueue 09:27:21 policy-pap | [2025-06-19T09:26:17.268+00:00|INFO|ServiceManager|http-nio-6969-exec-6] opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpUpdate started 09:27:21 policy-pap | [2025-06-19T09:26:17.269+00:00|INFO|network|Thread-7] 
[OUT|KAFKA|policy-pdp-pap] 09:27:21 policy-pap | {"source":"pap-6f1d1af8-1c2a-4d8a-8de4-bf40559a9b3f","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"abac","version":"1.0.7"}],"messageName":"PDP_UPDATE","requestId":"8652fa24-c749-4491-9f90-13545ab0a216","timestampMs":1750325177236,"name":"opa-6446e8da-32c3-48c2-9df7-d65664d9050e","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 09:27:21 policy-pap | [2025-06-19T09:26:17.284+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 09:27:21 policy-pap | {"source":"pap-6f1d1af8-1c2a-4d8a-8de4-bf40559a9b3f","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"abac","version":"1.0.7"}],"messageName":"PDP_UPDATE","requestId":"8652fa24-c749-4491-9f90-13545ab0a216","timestampMs":1750325177236,"name":"opa-6446e8da-32c3-48c2-9df7-d65664d9050e","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 09:27:21 policy-pap | [2025-06-19T09:26:17.284+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 09:27:21 policy-pap | [2025-06-19T09:26:17.285+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 09:27:21 policy-pap | {"source":"pap-6f1d1af8-1c2a-4d8a-8de4-bf40559a9b3f","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"abac","version":"1.0.7"}],"messageName":"PDP_UPDATE","requestId":"8652fa24-c749-4491-9f90-13545ab0a216","timestampMs":1750325177236,"name":"opa-6446e8da-32c3-48c2-9df7-d65664d9050e","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 09:27:21 policy-pap | [2025-06-19T09:26:17.286+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 09:27:21 policy-pap | [2025-06-19T09:26:17.295+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 09:27:21 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"8652fa24-c749-4491-9f90-13545ab0a216","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"abac\": \"1.0.7\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-6446e8da-32c3-48c2-9df7-d65664d9050e","requestId":"cbe4315c-17a7-4235-be3e-5590dfdb0738","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750325177285","deploymentInstanceInfo":""} 09:27:21 policy-pap | [2025-06-19T09:26:17.295+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 8652fa24-c749-4491-9f90-13545ab0a216 09:27:21 policy-pap | [2025-06-19T09:26:17.296+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 09:27:21 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"8652fa24-c749-4491-9f90-13545ab0a216","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"abac\": \"1.0.7\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-6446e8da-32c3-48c2-9df7-d65664d9050e","requestId":"cbe4315c-17a7-4235-be3e-5590dfdb0738","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750325177285","deploymentInstanceInfo":""} 09:27:21 policy-pap | [2025-06-19T09:26:17.296+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpUpdate stopping 09:27:21 policy-pap | [2025-06-19T09:26:17.296+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] 
opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpUpdate stopping enqueue 09:27:21 policy-pap | [2025-06-19T09:26:17.296+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpUpdate stopping timer 09:27:21 policy-pap | [2025-06-19T09:26:17.296+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=8652fa24-c749-4491-9f90-13545ab0a216, expireMs=1750325207268] 09:27:21 policy-pap | [2025-06-19T09:26:17.296+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpUpdate stopping listener 09:27:21 policy-pap | [2025-06-19T09:26:17.296+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpUpdate stopped 09:27:21 policy-pap | [2025-06-19T09:26:17.308+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-6446e8da-32c3-48c2-9df7-d65664d9050e PdpUpdate successful 09:27:21 policy-pap | [2025-06-19T09:26:17.308+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-6446e8da-32c3-48c2-9df7-d65664d9050e has no more requests 09:27:21 policy-pap | [2025-06-19T09:26:17.308+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification] 09:27:21 policy-pap | {"deployed-policies":[],"undeployed-policies":[{"policy-type":"onap.policies.native.opa","policy-type-version":"1.0.0","policy-id":"abac","policy-version":"1.0.7","success-count":1,"failure-count":0,"incomplete-count":0}]} 09:27:21 policy-pap | [2025-06-19T09:26:17.678+00:00|INFO|SessionData|http-nio-6969-exec-10] cache group opaGroup 09:27:21 policy-pap | [2025-06-19T09:26:17.678+00:00|WARN|PdpGroupDeleteProvider|http-nio-6969-exec-10] failed to undeploy policy: abac null 09:27:21 policy-pap | [2025-06-19T09:26:17.678+00:00|WARN|PdpGroupDeleteControllerV1|http-nio-6969-exec-10] undeploy policy failed 09:27:21 policy-pap | org.onap.policy.models.base.PfModelException: policy does not appear in any PDP group: abac null 09:27:21 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteProvider.undeployPolicy(PdpGroupDeleteProvider.java:108) 09:27:21 policy-pap | at org.onap.policy.pap.main.rest.ProviderBase.process(ProviderBase.java:161) 09:27:21 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteProvider.undeploy(PdpGroupDeleteProvider.java:92) 09:27:21 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 09:27:21 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) 09:27:21 policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 09:27:21 policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569) 09:27:21 policy-pap | at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:359) 09:27:21 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:196) 09:27:21 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163) 09:27:21 policy-pap | at org.springframework.aop.aspectj.AspectJAfterThrowingAdvice.invoke(AspectJAfterThrowingAdvice.java:64) 09:27:21 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) 09:27:21 policy-pap | at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:97) 09:27:21 policy-pap | at 
org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) 09:27:21 policy-pap | at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:728) 09:27:21 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteProvider$$SpringCGLIB$$0.undeploy() 09:27:21 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.lambda$deletePolicy$1(PdpGroupDeleteControllerV1.java:107) 09:27:21 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.doUndeployOperation(PdpGroupDeleteControllerV1.java:160) 09:27:21 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.deletePolicy(PdpGroupDeleteControllerV1.java:106) 09:27:21 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 09:27:21 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) 09:27:21 policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 09:27:21 policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569) 09:27:21 policy-pap | at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:359) 09:27:21 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:196) 09:27:21 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163) 09:27:21 policy-pap | at org.springframework.aop.aspectj.AspectJAfterThrowingAdvice.invoke(AspectJAfterThrowingAdvice.java:64) 09:27:21 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) 09:27:21 policy-pap | at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:97) 09:27:21 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) 09:27:21 policy-pap | at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:728) 09:27:21 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1$$SpringCGLIB$$0.deletePolicy() 09:27:21 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 09:27:21 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) 09:27:21 policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 09:27:21 policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569) 09:27:21 policy-pap | at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:258) 09:27:21 policy-pap | at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:191) 09:27:21 policy-pap | at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:118) 09:27:21 policy-pap | at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:986) 09:27:21 policy-pap | at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:891) 
09:27:21 policy-pap | at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87) 09:27:21 policy-pap | at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1089) 09:27:21 policy-pap | at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:979) 09:27:21 policy-pap | at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1014) 09:27:21 policy-pap | at org.springframework.web.servlet.FrameworkServlet.doDelete(FrameworkServlet.java:936) 09:27:21 policy-pap | at jakarta.servlet.http.HttpServlet.service(HttpServlet.java:659) 09:27:21 policy-pap | at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:885) 09:27:21 policy-pap | at jakarta.servlet.http.HttpServlet.service(HttpServlet.java:723) 09:27:21 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:195) 09:27:21 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 09:27:21 policy-pap | at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:51) 09:27:21 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) 09:27:21 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 09:27:21 policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:108) 09:27:21 policy-pap | at org.springframework.security.web.FilterChainProxy.lambda$doFilterInternal$3(FilterChainProxy.java:231) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$FilterObservation$SimpleFilterObservation.lambda$wrap$1(ObservationFilterChainDecorator.java:479) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$AroundFilterObservation$SimpleAroundFilterObservation.lambda$wrap$1(ObservationFilterChainDecorator.java:340) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator.lambda$wrapSecured$0(ObservationFilterChainDecorator.java:82) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:128) 09:27:21 policy-pap | at org.springframework.security.web.access.intercept.AuthorizationFilter.doFilter(AuthorizationFilter.java:101) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 09:27:21 policy-pap | at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:126) 09:27:21 policy-pap | at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:120) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 09:27:21 policy-pap | at 
org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 09:27:21 policy-pap | at org.springframework.security.web.authentication.AnonymousAuthenticationFilter.doFilter(AnonymousAuthenticationFilter.java:100) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 09:27:21 policy-pap | at org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter.doFilter(SecurityContextHolderAwareRequestFilter.java:179) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 09:27:21 policy-pap | at org.springframework.security.web.savedrequest.RequestCacheAwareFilter.doFilter(RequestCacheAwareFilter.java:63) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 09:27:21 policy-pap | at org.springframework.security.web.authentication.www.BasicAuthenticationFilter.doFilterInternal(BasicAuthenticationFilter.java:213) 09:27:21 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 09:27:21 policy-pap | at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:107) 09:27:21 policy-pap | at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:93) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 09:27:21 policy-pap | at 
org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 09:27:21 policy-pap | at org.springframework.security.web.header.HeaderWriterFilter.doHeadersAfter(HeaderWriterFilter.java:90) 09:27:21 policy-pap | at org.springframework.security.web.header.HeaderWriterFilter.doFilterInternal(HeaderWriterFilter.java:75) 09:27:21 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 09:27:21 policy-pap | at org.springframework.security.web.context.SecurityContextHolderFilter.doFilter(SecurityContextHolderFilter.java:82) 09:27:21 policy-pap | at org.springframework.security.web.context.SecurityContextHolderFilter.doFilter(SecurityContextHolderFilter.java:69) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 09:27:21 policy-pap | at org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter.doFilterInternal(WebAsyncManagerIntegrationFilter.java:62) 09:27:21 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 09:27:21 policy-pap | at org.springframework.security.web.session.DisableEncodeUrlFilter.doFilterInternal(DisableEncodeUrlFilter.java:42) 09:27:21 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$AroundFilterObservation$SimpleAroundFilterObservation.lambda$wrap$0(ObservationFilterChainDecorator.java:323) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:224) 09:27:21 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 09:27:21 policy-pap | at 
org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:233) 09:27:21 policy-pap | at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:191) 09:27:21 policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:113) 09:27:21 policy-pap | at org.springframework.web.servlet.handler.HandlerMappingIntrospector.lambda$createCacheFilter$3(HandlerMappingIntrospector.java:243) 09:27:21 policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:113) 09:27:21 policy-pap | at org.springframework.web.filter.CompositeFilter.doFilter(CompositeFilter.java:74) 09:27:21 policy-pap | at org.springframework.security.config.annotation.web.configuration.WebMvcSecurityConfiguration$CompositeFilterChainProxy.doFilter(WebMvcSecurityConfiguration.java:238) 09:27:21 policy-pap | at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:362) 09:27:21 policy-pap | at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:278) 09:27:21 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) 09:27:21 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 09:27:21 policy-pap | at org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:100) 09:27:21 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 09:27:21 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) 09:27:21 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 09:27:21 policy-pap | at org.springframework.web.filter.FormContentFilter.doFilterInternal(FormContentFilter.java:93) 09:27:21 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 09:27:21 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) 09:27:21 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 09:27:21 policy-pap | at org.springframework.web.filter.ServerHttpObservationFilter.doFilterInternal(ServerHttpObservationFilter.java:114) 09:27:21 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 09:27:21 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) 09:27:21 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 09:27:21 policy-pap | at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:201) 09:27:21 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 09:27:21 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) 09:27:21 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 09:27:21 policy-pap | at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:167) 09:27:21 policy-pap | at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:90) 09:27:21 
policy-pap | at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:483) 09:27:21 policy-pap | at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:116) 09:27:21 policy-pap | at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:93) 09:27:21 policy-pap | at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:74) 09:27:21 policy-pap | at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:344) 09:27:21 policy-pap | at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:398) 09:27:21 policy-pap | at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:63) 09:27:21 policy-pap | at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:903) 09:27:21 policy-pap | at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1740) 09:27:21 policy-pap | at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:52) 09:27:21 policy-pap | at org.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1189) 09:27:21 policy-pap | at org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:658) 09:27:21 policy-pap | at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:63) 09:27:21 policy-pap | at java.base/java.lang.Thread.run(Thread.java:840) 09:27:21 policy-pap | [2025-06-19T09:26:21.415+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=9c9107b1-c799-4c7b-9973-58924a565674, expireMs=1750325181415] 09:27:21 postgres | The files belonging to this database system will be owned by user "postgres". 09:27:21 postgres | This user must also own the server process. 09:27:21 postgres | 09:27:21 postgres | The database cluster will be initialized with locale "en_US.utf8". 09:27:21 postgres | The default database encoding has accordingly been set to "UTF8". 09:27:21 postgres | The default text search configuration will be set to "english". 09:27:21 postgres | 09:27:21 postgres | Data page checksums are disabled. 09:27:21 postgres | 09:27:21 postgres | fixing permissions on existing directory /var/lib/postgresql/data ... ok 09:27:21 postgres | creating subdirectories ... ok 09:27:21 postgres | selecting dynamic shared memory implementation ... posix 09:27:21 postgres | selecting default max_connections ... 100 09:27:21 postgres | selecting default shared_buffers ... 128MB 09:27:21 postgres | selecting default time zone ... Etc/UTC 09:27:21 postgres | creating configuration files ... ok 09:27:21 postgres | running bootstrap script ... ok 09:27:21 postgres | performing post-bootstrap initialization ... ok 09:27:21 postgres | syncing data to disk ... ok 09:27:21 postgres | 09:27:21 postgres | 09:27:21 postgres | Success. You can now start the database server using: 09:27:21 postgres | 09:27:21 postgres | pg_ctl -D /var/lib/postgresql/data -l logfile start 09:27:21 postgres | 09:27:21 postgres | initdb: warning: enabling "trust" authentication for local connections 09:27:21 postgres | initdb: hint: You can change this by editing pg_hba.conf or using the option -A, or --auth-local and --auth-host, the next time you run initdb. 
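The policy.abac field in the PDP_UPDATE message logged by policy-pap above is a base64-encoded Rego module. Decoding that field (for example with base64 -d) yields the policy that was deployed to the OPA PDP as abac 1.0.7; the decoded text is reproduced here only for readability and is not part of the console output:

    package abac

    import rego.v1

    default allow := false

    allow if {
     viewable_sensor_data
     action_is_read
    }

    action_is_read if "read" in input.actions

    viewable_sensor_data contains view_data if {
     some sensor_data in data.node.abac.sensor_data
     sensor_data.timestamp >= input.time_period.from
     sensor_data.timestamp < input.time_period.to

     view_data := {datatype: sensor_data[datatype] | datatype in input.datatypes}
    }
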
09:27:21 postgres | waiting for server to start....2025-06-19 09:20:54.283 UTC [48] LOG: starting PostgreSQL 16.4 (Debian 16.4-1.pgdg120+2) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit 09:27:21 postgres | 2025-06-19 09:20:54.285 UTC [48] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432" 09:27:21 postgres | 2025-06-19 09:20:54.293 UTC [51] LOG: database system was shut down at 2025-06-19 09:20:53 UTC 09:27:21 postgres | 2025-06-19 09:20:54.299 UTC [48] LOG: database system is ready to accept connections 09:27:21 postgres | done 09:27:21 postgres | server started 09:27:21 postgres | 09:27:21 postgres | /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/db-pg.conf 09:27:21 postgres | 09:27:21 postgres | /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/db-pg.sh 09:27:21 postgres | #!/bin/bash -xv 09:27:21 postgres | # Copyright (C) 2022, 2024 Nordix Foundation. All rights reserved 09:27:21 postgres | # 09:27:21 postgres | # Licensed under the Apache License, Version 2.0 (the "License"); 09:27:21 postgres | # you may not use this file except in compliance with the License. 09:27:21 postgres | # You may obtain a copy of the License at 09:27:21 postgres | # 09:27:21 postgres | # http://www.apache.org/licenses/LICENSE-2.0 09:27:21 postgres | # 09:27:21 postgres | # Unless required by applicable law or agreed to in writing, software 09:27:21 postgres | # distributed under the License is distributed on an "AS IS" BASIS, 09:27:21 postgres | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 09:27:21 postgres | # See the License for the specific language governing permissions and 09:27:21 postgres | # limitations under the License. 09:27:21 postgres | 09:27:21 postgres | psql -U postgres -d postgres --command "CREATE USER ${PGSQL_USER} WITH PASSWORD '${PGSQL_PASSWORD}';" 09:27:21 postgres | + psql -U postgres -d postgres --command 'CREATE USER policy_user WITH PASSWORD '\''policy_user'\'';' 09:27:21 postgres | CREATE ROLE 09:27:21 postgres | 09:27:21 postgres | for db in migration pooling policyadmin policyclamp operationshistory clampacm 09:27:21 postgres | do 09:27:21 postgres | psql -U postgres -d postgres --command "CREATE DATABASE ${db};" 09:27:21 postgres | psql -U postgres -d postgres --command "ALTER DATABASE ${db} OWNER TO ${PGSQL_USER} ;" 09:27:21 postgres | psql -U postgres -d postgres --command "GRANT ALL PRIVILEGES ON DATABASE ${db} TO ${PGSQL_USER} ;" 09:27:21 postgres | done 09:27:21 postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm 09:27:21 postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE migration;' 09:27:21 postgres | CREATE DATABASE 09:27:21 postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE migration OWNER TO policy_user ;' 09:27:21 postgres | ALTER DATABASE 09:27:21 postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE migration TO policy_user ;' 09:27:21 postgres | GRANT 09:27:21 postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm 09:27:21 postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE pooling;' 09:27:21 postgres | CREATE DATABASE 09:27:21 postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE pooling OWNER TO policy_user ;' 09:27:21 postgres | ALTER DATABASE 09:27:21 postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE pooling TO policy_user ;' 
09:27:21 postgres | GRANT 09:27:21 postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm 09:27:21 postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE policyadmin;' 09:27:21 postgres | CREATE DATABASE 09:27:21 postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE policyadmin OWNER TO policy_user ;' 09:27:21 postgres | ALTER DATABASE 09:27:21 postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE policyadmin TO policy_user ;' 09:27:21 postgres | GRANT 09:27:21 postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm 09:27:21 postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE policyclamp;' 09:27:21 postgres | CREATE DATABASE 09:27:21 postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE policyclamp OWNER TO policy_user ;' 09:27:21 postgres | ALTER DATABASE 09:27:21 postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE policyclamp TO policy_user ;' 09:27:21 postgres | GRANT 09:27:21 postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm 09:27:21 postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE operationshistory;' 09:27:21 postgres | CREATE DATABASE 09:27:21 postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE operationshistory OWNER TO policy_user ;' 09:27:21 postgres | ALTER DATABASE 09:27:21 postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE operationshistory TO policy_user ;' 09:27:21 postgres | GRANT 09:27:21 postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm 09:27:21 postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE clampacm;' 09:27:21 postgres | CREATE DATABASE 09:27:21 postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE clampacm OWNER TO policy_user ;' 09:27:21 postgres | ALTER DATABASE 09:27:21 postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE clampacm TO policy_user ;' 09:27:21 postgres | GRANT 09:27:21 postgres | 09:27:21 postgres | waiting for server to shut down....2025-06-19 09:20:55.453 UTC [48] LOG: received fast shutdown request 09:27:21 postgres | 2025-06-19 09:20:55.455 UTC [48] LOG: aborting any active transactions 09:27:21 postgres | 2025-06-19 09:20:55.458 UTC [48] LOG: background worker "logical replication launcher" (PID 54) exited with exit code 1 09:27:21 postgres | 2025-06-19 09:20:55.460 UTC [49] LOG: shutting down 09:27:21 postgres | 2025-06-19 09:20:55.461 UTC [49] LOG: checkpoint starting: shutdown immediate 09:27:21 postgres | 2025-06-19 09:20:55.989 UTC [49] LOG: checkpoint complete: wrote 5511 buffers (33.6%); 0 WAL file(s) added, 0 removed, 1 recycled; write=0.372 s, sync=0.146 s, total=0.530 s; sync files=1788, longest=0.019 s, average=0.001 s; distance=25535 kB, estimate=25535 kB; lsn=0/2DDA218, redo lsn=0/2DDA218 09:27:21 postgres | 2025-06-19 09:20:56.001 UTC [48] LOG: database system is shut down 09:27:21 postgres | done 09:27:21 postgres | server stopped 09:27:21 postgres | 09:27:21 postgres | PostgreSQL init process complete; ready for start up. 
09:27:21 postgres | 09:27:21 postgres | 2025-06-19 09:20:56.083 UTC [1] LOG: starting PostgreSQL 16.4 (Debian 16.4-1.pgdg120+2) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit 09:27:21 postgres | 2025-06-19 09:20:56.084 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432 09:27:21 postgres | 2025-06-19 09:20:56.084 UTC [1] LOG: listening on IPv6 address "::", port 5432 09:27:21 postgres | 2025-06-19 09:20:56.091 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432" 09:27:21 postgres | 2025-06-19 09:20:56.100 UTC [101] LOG: database system was shut down at 2025-06-19 09:20:55 UTC 09:27:21 postgres | 2025-06-19 09:20:56.105 UTC [1] LOG: database system is ready to accept connections 09:27:21 postgres | 2025-06-19 09:25:56.132 UTC [99] LOG: checkpoint starting: time 09:27:21 postgres | 2025-06-19 09:27:01.565 UTC [99] LOG: checkpoint complete: wrote 655 buffers (4.0%); 0 WAL file(s) added, 0 removed, 1 recycled; write=65.400 s, sync=0.023 s, total=65.434 s; sync files=519, longest=0.002 s, average=0.001 s; distance=3563 kB, estimate=3563 kB; lsn=0/3157520, redo lsn=0/3155000 09:27:21 prometheus | time=2025-06-19T09:20:52.316Z level=INFO source=main.go:674 msg="No time or size retention was set so using the default time retention" duration=15d 09:27:21 prometheus | time=2025-06-19T09:20:52.316Z level=INFO source=main.go:725 msg="Starting Prometheus Server" mode=server version="(version=3.4.1, branch=HEAD, revision=aea6503d9bbaad6c5faff3ecf6f1025213356c92)" 09:27:21 prometheus | time=2025-06-19T09:20:52.316Z level=INFO source=main.go:730 msg="operational information" build_context="(go=go1.24.3, platform=linux/amd64, user=root@16f976c24db1, date=20250531-10:44:38, tags=netgo,builtinassets,stringlabels)" host_details="(Linux 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 prometheus (none))" fd_limits="(soft=1048576, hard=1048576)" vm_limits="(soft=unlimited, hard=unlimited)" 09:27:21 prometheus | time=2025-06-19T09:20:52.317Z level=INFO source=main.go:806 msg="Leaving GOMAXPROCS=8: CPU quota undefined" component=automaxprocs 09:27:21 prometheus | time=2025-06-19T09:20:52.319Z level=INFO source=web.go:656 msg="Start listening for connections" component=web address=0.0.0.0:9090 09:27:21 prometheus | time=2025-06-19T09:20:52.320Z level=INFO source=main.go:1266 msg="Starting TSDB ..." 09:27:21 prometheus | time=2025-06-19T09:20:52.322Z level=INFO source=tls_config.go:347 msg="Listening on" component=web address=[::]:9090 09:27:21 prometheus | time=2025-06-19T09:20:52.322Z level=INFO source=tls_config.go:350 msg="TLS is disabled." 
component=web http2=false address=[::]:9090 09:27:21 prometheus | time=2025-06-19T09:20:52.331Z level=INFO source=head.go:657 msg="Replaying on-disk memory mappable chunks if any" component=tsdb 09:27:21 prometheus | time=2025-06-19T09:20:52.331Z level=INFO source=head.go:744 msg="On-disk memory mappable chunks replay completed" component=tsdb duration=1.5µs 09:27:21 prometheus | time=2025-06-19T09:20:52.331Z level=INFO source=head.go:752 msg="Replaying WAL, this may take a while" component=tsdb 09:27:21 prometheus | time=2025-06-19T09:20:52.332Z level=INFO source=head.go:825 msg="WAL segment loaded" component=tsdb segment=0 maxSegment=0 duration=255.162µs 09:27:21 prometheus | time=2025-06-19T09:20:52.332Z level=INFO source=head.go:862 msg="WAL replay completed" component=tsdb checkpoint_replay_duration=30.651µs wal_replay_duration=274.102µs wbl_replay_duration=180ns chunk_snapshot_load_duration=0s mmap_chunk_replay_duration=1.5µs total_replay_duration=356.423µs 09:27:21 prometheus | time=2025-06-19T09:20:52.333Z level=INFO source=main.go:1287 msg="filesystem information" fs_type=EXT4_SUPER_MAGIC 09:27:21 prometheus | time=2025-06-19T09:20:52.333Z level=INFO source=main.go:1290 msg="TSDB started" 09:27:21 prometheus | time=2025-06-19T09:20:52.333Z level=INFO source=main.go:1475 msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml 09:27:21 prometheus | time=2025-06-19T09:20:52.335Z level=INFO source=main.go:1514 msg="updated GOGC" old=100 new=75 09:27:21 prometheus | time=2025-06-19T09:20:52.335Z level=INFO source=main.go:1524 msg="Completed loading of configuration file" db_storage=1.44µs remote_storage=2.15µs web_handler=830ns query_engine=2.06µs scrape=295.743µs scrape_sd=368.543µs notify=234.562µs notify_sd=28.6µs rules=2.25µs tracing=22.16µs filename=/etc/prometheus/prometheus.yml totalDuration=1.565575ms 09:27:21 prometheus | time=2025-06-19T09:20:52.335Z level=INFO source=main.go:1251 msg="Server is ready to receive web requests." 09:27:21 prometheus | time=2025-06-19T09:20:52.335Z level=INFO source=manager.go:175 msg="Starting rule manager..." component="rule manager" 09:27:21 zookeeper | ===> User 09:27:21 zookeeper | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) 09:27:21 zookeeper | ===> Configuring ... 09:27:21 zookeeper | ===> Running preflight checks ... 09:27:21 zookeeper | ===> Check if /var/lib/zookeeper/data is writable ... 09:27:21 zookeeper | ===> Check if /var/lib/zookeeper/log is writable ... 09:27:21 zookeeper | ===> Launching ... 09:27:21 zookeeper | ===> Launching zookeeper ... 
09:27:21 zookeeper | [2025-06-19 09:20:54,479] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 09:27:21 zookeeper | [2025-06-19 09:20:54,481] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 09:27:21 zookeeper | [2025-06-19 09:20:54,481] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 09:27:21 zookeeper | [2025-06-19 09:20:54,481] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 09:27:21 zookeeper | [2025-06-19 09:20:54,481] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 09:27:21 zookeeper | [2025-06-19 09:20:54,483] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager) 09:27:21 zookeeper | [2025-06-19 09:20:54,483] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager) 09:27:21 zookeeper | [2025-06-19 09:20:54,483] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager) 09:27:21 zookeeper | [2025-06-19 09:20:54,483] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain) 09:27:21 zookeeper | [2025-06-19 09:20:54,484] INFO Log4j 1.2 jmx support not found; jmx disabled. (org.apache.zookeeper.jmx.ManagedUtil) 09:27:21 zookeeper | [2025-06-19 09:20:54,484] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 09:27:21 zookeeper | [2025-06-19 09:20:54,484] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 09:27:21 zookeeper | [2025-06-19 09:20:54,485] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 09:27:21 zookeeper | [2025-06-19 09:20:54,485] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 09:27:21 zookeeper | [2025-06-19 09:20:54,485] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 09:27:21 zookeeper | [2025-06-19 09:20:54,485] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain) 09:27:21 zookeeper | [2025-06-19 09:20:54,495] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@3bbc39f8 (org.apache.zookeeper.server.ServerMetrics) 09:27:21 zookeeper | [2025-06-19 09:20:54,497] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) 09:27:21 zookeeper | [2025-06-19 09:20:54,497] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) 09:27:21 zookeeper | [2025-06-19 09:20:54,499] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog) 09:27:21 zookeeper | [2025-06-19 09:20:54,507] INFO (org.apache.zookeeper.server.ZooKeeperServer) 09:27:21 zookeeper | [2025-06-19 09:20:54,507] INFO ______ _ (org.apache.zookeeper.server.ZooKeeperServer) 09:27:21 zookeeper | [2025-06-19 09:20:54,507] INFO |___ / | | (org.apache.zookeeper.server.ZooKeeperServer) 09:27:21 zookeeper | [2025-06-19 09:20:54,507] INFO / / ___ ___ | | __ ___ ___ _ __ ___ _ __ (org.apache.zookeeper.server.ZooKeeperServer) 09:27:21 zookeeper | [2025-06-19 09:20:54,507] INFO / 
/ / _ \ / _ \ | |/ / / _ \ / _ \ | '_ \ / _ \ | '__| (org.apache.zookeeper.server.ZooKeeperServer) 09:27:21 zookeeper | [2025-06-19 09:20:54,507] INFO / /__ | (_) | | (_) | | < | __/ | __/ | |_) | | __/ | | (org.apache.zookeeper.server.ZooKeeperServer) 09:27:21 zookeeper | [2025-06-19 09:20:54,507] INFO /_____| \___/ \___/ |_|\_\ \___| \___| | .__/ \___| |_| (org.apache.zookeeper.server.ZooKeeperServer) 09:27:21 zookeeper | [2025-06-19 09:20:54,507] INFO | | (org.apache.zookeeper.server.ZooKeeperServer) 09:27:21 zookeeper | [2025-06-19 09:20:54,507] INFO |_| (org.apache.zookeeper.server.ZooKeeperServer) 09:27:21 zookeeper | [2025-06-19 09:20:54,507] INFO (org.apache.zookeeper.server.ZooKeeperServer) 09:27:21 zookeeper | [2025-06-19 09:20:54,508] INFO Server environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.server.ZooKeeperServer) 09:27:21 zookeeper | [2025-06-19 09:20:54,509] INFO Server environment:host.name=zookeeper (org.apache.zookeeper.server.ZooKeeperServer) 09:27:21 zookeeper | [2025-06-19 09:20:54,509] INFO Server environment:java.version=17.0.14 (org.apache.zookeeper.server.ZooKeeperServer) 09:27:21 zookeeper | [2025-06-19 09:20:54,509] INFO Server environment:java.vendor=Eclipse Adoptium (org.apache.zookeeper.server.ZooKeeperServer) 09:27:21 zookeeper | [2025-06-19 09:20:54,509] INFO Server environment:java.home=/usr/lib/jvm/temurin-17-jre (org.apache.zookeeper.server.ZooKeeperServer) 09:27:21 zookeeper | [2025-06-19 09:20:54,509] INFO Server environment:java.class.path=/usr/bin/../share/java/kafka/kafka-streams-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-transaction-coordinator-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/connect-transforms-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.118.Final.jar:/usr/bin/../share/java/kafka/kafka-clients-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/connect-mirror-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.16.2.jar:/usr/bin/../share/java/kafka/scala-library-2.13.15.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.118.Final.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.16.2.jar:/usr/bin/../share/java/kafka/connect-runtime-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-afterburner-2.16.2.jar:/usr/bin/../share/java/kafka
/kafka-streams-test-utils-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/protobuf-java-3.25.5.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.16.2.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/maven-artifact-3.9.6.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/trogdor-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-server-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-shell-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.15.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-common-4.1.118.Final.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/commons-lang3-3.12.0.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.118.Final.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.6-4.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.118.Final.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.118.Final.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-tools-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.16.2.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.118.Final.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.16.2.jar:/usr/bin/../share/java/kafka/jackson-databind-2.16.2.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/opentelemetry-proto-1.0.0-alpha.jar:/usr/bin/..
/share/java/kafka/jetty-security-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/connect-json-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.16.2.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/kafka-raft-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/plexus-utils-3.5.1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/connect-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/commons-io-2.14.0.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.16.2.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/jackson-core-2.16.2.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jline-3.25.1.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.118.Final.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.118.Final.jar:/usr/bin/../share/java/kafka/kafka-storage-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.57.v20241219.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.server.ZooKeeperServer)
09:27:21 zookeeper | [2025-06-19 09:20:54,509] INFO Server environment:java.library.path=/usr/local/lib64:/usr/local/lib::/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer)
09:27:21 zookeeper | [2025-06-19 09:20:54,509] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer)
09:27:21 zookeeper | [2025-06-19 09:20:54,509] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer)
09:27:21 zookeeper | [2025-06-19 09:20:54,509] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer)
09:27:21 zookeeper | [2025-06-19 09:20:54,509] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer)
09:27:21 zookeeper | [2025-06-19 09:20:54,509] INFO Server environment:os.version=4.15.0-192-generic (org.apache.zookeeper.server.ZooKeeperServer)
09:27:21 zookeeper | [2025-06-19 09:20:54,509] INFO Server environment:user.name=appuser (org.apache.zookeeper.server.ZooKeeperServer)
09:27:21 zookeeper | [2025-06-19 09:20:54,509] INFO Server environment:user.home=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer)
09:27:21 zookeeper | [2025-06-19 09:20:54,509] INFO Server environment:user.dir=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer)
09:27:21 zookeeper | [2025-06-19 09:20:54,509] INFO Server environment:os.memory.free=494MB (org.apache.zookeeper.server.ZooKeeperServer)
09:27:21 zookeeper | [2025-06-19 09:20:54,509] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer)
09:27:21 zookeeper | [2025-06-19 09:20:54,509] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer)
09:27:21 zookeeper | [2025-06-19 09:20:54,509] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer)
09:27:21 zookeeper | [2025-06-19 09:20:54,509] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer)
09:27:21 zookeeper | [2025-06-19 09:20:54,509] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer)
09:27:21 zookeeper | [2025-06-19 09:20:54,510] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer)
09:27:21 zookeeper | [2025-06-19 09:20:54,510] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer)
09:27:21 zookeeper | [2025-06-19 09:20:54,510] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer)
09:27:21 zookeeper | [2025-06-19 09:20:54,510] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer)
09:27:21 zookeeper | [2025-06-19 09:20:54,510] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle)
09:27:21 zookeeper | [2025-06-19 09:20:54,511] INFO minSessionTimeout set to 6000 ms (org.apache.zookeeper.server.ZooKeeperServer)
09:27:21 zookeeper | [2025-06-19 09:20:54,511] INFO maxSessionTimeout set to 60000 ms (org.apache.zookeeper.server.ZooKeeperServer)
09:27:21 zookeeper | [2025-06-19 09:20:54,512] INFO getData response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache)
09:27:21 zookeeper | [2025-06-19 09:20:54,512] INFO getChildren response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache)
09:27:21 zookeeper | [2025-06-19 09:20:54,513] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
09:27:21 zookeeper | [2025-06-19 09:20:54,513] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
09:27:21 zookeeper | [2025-06-19 09:20:54,513] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
09:27:21 zookeeper | [2025-06-19 09:20:54,513] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
09:27:21 zookeeper | [2025-06-19 09:20:54,513] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
09:27:21 zookeeper | [2025-06-19 09:20:54,513] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
09:27:21 zookeeper | [2025-06-19 09:20:54,515] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer)
09:27:21 zookeeper | [2025-06-19 09:20:54,515] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer)
09:27:21 zookeeper | [2025-06-19 09:20:54,515] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper)
09:27:21 zookeeper | [2025-06-19 09:20:54,516] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper)
09:27:21 zookeeper | [2025-06-19 09:20:54,516] INFO Created server with tickTime 3000 ms minSessionTimeout 6000 ms maxSessionTimeout 60000 ms clientPortListenBacklog -1 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer)
09:27:21 zookeeper | [2025-06-19 09:20:54,537] INFO Logging initialized @438ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log)
09:27:21 zookeeper | [2025-06-19 09:20:54,591] WARN o.e.j.s.ServletContextHandler@6150c3ec{/,null,STOPPED} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler)
09:27:21 zookeeper | [2025-06-19 09:20:54,591] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler)
09:27:21 zookeeper | [2025-06-19 09:20:54,605] INFO jetty-9.4.57.v20241219; built: 2025-01-08T21:24:30.412Z; git: df524e6b29271c2e09ba9aea83c18dc9db464a31; jvm 17.0.14+7 (org.eclipse.jetty.server.Server)
09:27:21 zookeeper | [2025-06-19 09:20:54,640] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session)
09:27:21 zookeeper | [2025-06-19 09:20:54,640] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session)
09:27:21 zookeeper | [2025-06-19 09:20:54,641] INFO node0 Scavenging every 600000ms (org.eclipse.jetty.server.session)
09:27:21 zookeeper | [2025-06-19 09:20:54,644] WARN ServletContext@o.e.j.s.ServletContextHandler@6150c3ec{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler)
09:27:21 zookeeper | [2025-06-19 09:20:54,653] INFO Started o.e.j.s.ServletContextHandler@6150c3ec{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler)
09:27:21 zookeeper | [2025-06-19 09:20:54,662] INFO Started ServerConnector@222545dc{HTTP/1.1, (http/1.1)}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector)
09:27:21 zookeeper | [2025-06-19 09:20:54,662] INFO Started @568ms (org.eclipse.jetty.server.Server)
09:27:21 zookeeper | [2025-06-19 09:20:54,662] INFO Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer)
09:27:21 zookeeper | [2025-06-19 09:20:54,665] INFO Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory)
09:27:21 zookeeper | [2025-06-19 09:20:54,666] WARN maxCnxns is not configured, using default value 0. (org.apache.zookeeper.server.ServerCnxnFactory)
09:27:21 zookeeper | [2025-06-19 09:20:54,667] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. (org.apache.zookeeper.server.NIOServerCnxnFactory)
09:27:21 zookeeper | [2025-06-19 09:20:54,668] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory)
09:27:21 zookeeper | [2025-06-19 09:20:54,677] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory)
09:27:21 zookeeper | [2025-06-19 09:20:54,677] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory)
09:27:21 zookeeper | [2025-06-19 09:20:54,677] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase)
09:27:21 zookeeper | [2025-06-19 09:20:54,677] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase)
09:27:21 zookeeper | [2025-06-19 09:20:54,681] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream)
09:27:21 zookeeper | [2025-06-19 09:20:54,681] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
09:27:21 zookeeper | [2025-06-19 09:20:54,684] INFO Snapshot loaded in 6 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase)
09:27:21 zookeeper | [2025-06-19 09:20:54,684] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
09:27:21 zookeeper | [2025-06-19 09:20:54,685] INFO Snapshot taken in 0 ms (org.apache.zookeeper.server.ZooKeeperServer)
09:27:21 zookeeper | [2025-06-19 09:20:54,691] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms (org.apache.zookeeper.server.RequestThrottler)
09:27:21 zookeeper | [2025-06-19 09:20:54,691] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor)
09:27:21 zookeeper | [2025-06-19 09:20:54,703] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager)
09:27:21 zookeeper | [2025-06-19 09:20:54,704] INFO ZooKeeper audit is disabled. (org.apache.zookeeper.audit.ZKAuditProvider)
09:27:21 zookeeper | [2025-06-19 09:20:55,732] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog)
09:27:21 zookeeper | [2025-06-19 09:21:21,919] INFO Unable to read additional data from client, it probably closed the socket: address = /172.17.0.5:59774, session = 0x10000022abf0001 (org.apache.zookeeper.server.NIOServerCnxn)
09:27:21 Tearing down containers...
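A note on the ZooKeeper instance whose startup log appears above, before the container teardown output that follows: the server reports an AdminServer on port 8080 (command URL /commands) and the client port bound on 2181. A minimal bash sketch for spot-checking such an instance; the ports come from this log, while the use of curl and nc on the host is an assumption and not part of the CSIT scripts:

  # List the AdminServer commands exposed at the /commands URL reported above
  curl -s http://localhost:8080/commands

  # 'srvr' is the four-letter-word command whitelisted by default in ZooKeeper 3.5+;
  # it prints version, latency and connection counts via the client port (2181 above)
  echo srvr | nc localhost 2181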
09:27:21 Container grafana Stopping
09:27:21 Container policy-csit Stopping
09:27:21 Container policy-opa-pdp Stopping
09:27:21 Container policy-csit Stopped
09:27:21 Container policy-csit Removing
09:27:21 Container policy-csit Removed
09:27:22 Container grafana Stopped
09:27:22 Container grafana Removing
09:27:22 Container grafana Removed
09:27:22 Container prometheus Stopping
09:27:22 Container prometheus Stopped
09:27:22 Container prometheus Removing
09:27:22 Container prometheus Removed
09:27:32 Container policy-opa-pdp Stopped
09:27:32 Container policy-opa-pdp Removing
09:27:32 Container policy-opa-pdp Removed
09:27:32 Container policy-pap Stopping
09:27:42 Container policy-pap Stopped
09:27:42 Container policy-pap Removing
09:27:42 Container policy-pap Removed
09:27:42 Container kafka Stopping
09:27:42 Container policy-api Stopping
09:27:43 Container kafka Stopped
09:27:43 Container kafka Removing
09:27:43 Container kafka Removed
09:27:43 Container zookeeper Stopping
09:27:44 Container zookeeper Stopped
09:27:44 Container zookeeper Removing
09:27:44 Container zookeeper Removed
09:27:52 Container policy-api Stopped
09:27:52 Container policy-api Removing
09:27:52 Container policy-api Removed
09:27:52 Container policy-db-migrator Stopping
09:27:52 Container policy-db-migrator Stopped
09:27:52 Container policy-db-migrator Removing
09:27:52 Container policy-db-migrator Removed
09:27:52 Container postgres Stopping
09:27:53 Container postgres Stopped
09:27:53 Container postgres Removing
09:27:53 Container postgres Removed
09:27:53 Network compose_default Removing
09:27:53 Network compose_default Removed
09:27:53 $ ssh-agent -k
09:27:53 unset SSH_AUTH_SOCK;
09:27:53 unset SSH_AGENT_PID;
09:27:53 echo Agent pid 2054 killed;
09:27:53 [ssh-agent] Stopped.
09:27:53 Robot results publisher started...
09:27:53 INFO: Checking test criticality is deprecated and will be dropped in a future release!
09:27:53 -Parsing output xml:
09:27:53 Done!
09:27:53 -Copying log files to build dir:
09:27:53 Done!
09:27:53 -Assigning results to build:
09:27:53 Done!
09:27:53 -Checking thresholds:
09:27:53 Done!
09:27:53 Done publishing Robot results.
09:27:53 [PostBuildScript] - [INFO] Executing post build scripts.
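The teardown block above is typical Docker Compose shutdown output: containers are stopped and removed in reverse dependency order (grafana/prometheus, then policy-opa-pdp, policy-pap, kafka, zookeeper, policy-api, policy-db-migrator, postgres) before the compose_default network is removed. A hedged bash sketch of the kind of command that produces this output; the compose file name and the --timeout value are assumptions, not the exact invocation used by the CSIT scripts:

  # Stop and remove every service container plus the default network created by 'up'
  docker compose -f docker-compose.yml down

  # Optionally drop named/anonymous volumes and bound the stop grace period
  # (policy-opa-pdp and policy-pap above each took roughly 10 s to stop)
  docker compose -f docker-compose.yml down --volumes --timeout 30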
09:27:53 [policy-opa-pdp-master-project-csit-verify-opa-pdp] $ /bin/bash /tmp/jenkins10862175128905178063.sh
09:27:53 ---> sysstat.sh
09:27:54 [policy-opa-pdp-master-project-csit-verify-opa-pdp] $ /bin/bash /tmp/jenkins12563989664251422771.sh
09:27:54 ---> package-listing.sh
09:27:54 ++ facter osfamily
09:27:54 ++ tr '[:upper:]' '[:lower:]'
09:27:54 + OS_FAMILY=debian
09:27:54 + workspace=/w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp
09:27:54 + START_PACKAGES=/tmp/packages_start.txt
09:27:54 + END_PACKAGES=/tmp/packages_end.txt
09:27:54 + DIFF_PACKAGES=/tmp/packages_diff.txt
09:27:54 + PACKAGES=/tmp/packages_start.txt
09:27:54 + '[' /w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp ']'
09:27:54 + PACKAGES=/tmp/packages_end.txt
09:27:54 + case "${OS_FAMILY}" in
09:27:54 + dpkg -l
09:27:54 + grep '^ii'
09:27:54 + '[' -f /tmp/packages_start.txt ']'
09:27:54 + '[' -f /tmp/packages_end.txt ']'
09:27:54 + diff /tmp/packages_start.txt /tmp/packages_end.txt
09:27:54 + '[' /w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp ']'
09:27:54 + mkdir -p /w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp/archives/
09:27:54 + cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp/archives/
09:27:54 [policy-opa-pdp-master-project-csit-verify-opa-pdp] $ /bin/bash /tmp/jenkins14261170868334125883.sh
09:27:54 ---> capture-instance-metadata.sh
09:27:54 Setup pyenv:
09:27:54 system
09:27:54 3.8.13
09:27:54 3.9.13
09:27:54 * 3.10.6 (set by /w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp/.python-version)
09:27:54 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-fyfu from file:/tmp/.os_lf_venv
09:27:56 lf-activate-venv(): INFO: Installing: lftools
09:28:05 lf-activate-venv(): INFO: Adding /tmp/venv-fyfu/bin to PATH
09:28:05 INFO: Running in OpenStack, capturing instance metadata
09:28:05 [policy-opa-pdp-master-project-csit-verify-opa-pdp] $ /bin/bash /tmp/jenkins4435938045642721425.sh
09:28:05 provisioning config files...
09:28:05 copy managed file [jenkins-log-archives-settings] to file:/w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp@tmp/config228194828159060448tmp
09:28:05 Regular expression run condition: Expression=[^.*logs-s3.*], Label=[]
09:28:05 Run condition [Regular expression match] preventing perform for step [Provide Configuration files]
09:28:05 [EnvInject] - Injecting environment variables from a build step.
09:28:05 [EnvInject] - Injecting as environment variables the properties content
09:28:05 SERVER_ID=logs
09:28:05
09:28:05 [EnvInject] - Variables injected successfully.
09:28:05 [policy-opa-pdp-master-project-csit-verify-opa-pdp] $ /bin/bash /tmp/jenkins18290669995790226701.sh
09:28:05 ---> create-netrc.sh
09:28:05 [policy-opa-pdp-master-project-csit-verify-opa-pdp] $ /bin/bash /tmp/jenkins7725780661284608220.sh
09:28:05 ---> python-tools-install.sh
09:28:05 Setup pyenv:
09:28:05 system
09:28:05 3.8.13
09:28:05 3.9.13
09:28:05 * 3.10.6 (set by /w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp/.python-version)
09:28:06 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-fyfu from file:/tmp/.os_lf_venv
09:28:08 lf-activate-venv(): INFO: Installing: lftools
09:28:16 lf-activate-venv(): INFO: Adding /tmp/venv-fyfu/bin to PATH
09:28:16 [policy-opa-pdp-master-project-csit-verify-opa-pdp] $ /bin/bash /tmp/jenkins13775300638380584834.sh
09:28:16 ---> sudo-logs.sh
09:28:16 Archiving 'sudo' log..
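The package-listing.sh trace above records the installed-package diff logic used for archiving. A condensed bash sketch of the same flow; the paths are taken from the trace, while the redirect of the diff output and the use of $WORKSPACE in place of the literal workspace path are simplifications implied by the DIFF_PACKAGES variable and the final cp:

  #!/bin/bash
  # Capture the currently installed Debian packages (the '+ dpkg -l | grep ^ii' steps above)
  dpkg -l | grep '^ii' > /tmp/packages_end.txt

  # If a start-of-build snapshot exists, diff it against the end-of-build snapshot
  if [ -f /tmp/packages_start.txt ] && [ -f /tmp/packages_end.txt ]; then
    diff /tmp/packages_start.txt /tmp/packages_end.txt > /tmp/packages_diff.txt
  fi

  # Archive all three listings into the workspace, as the trace's final cp does
  mkdir -p "$WORKSPACE/archives/"
  cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt "$WORKSPACE/archives/"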
09:28:16 [policy-opa-pdp-master-project-csit-verify-opa-pdp] $ /bin/bash /tmp/jenkins7744297270769869229.sh
09:28:16 ---> job-cost.sh
09:28:16 Setup pyenv:
09:28:16 system
09:28:16 3.8.13
09:28:16 3.9.13
09:28:16 * 3.10.6 (set by /w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp/.python-version)
09:28:16 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-fyfu from file:/tmp/.os_lf_venv
09:28:18 lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15
09:28:23 lf-activate-venv(): INFO: Adding /tmp/venv-fyfu/bin to PATH
09:28:23 INFO: No Stack...
09:28:23 INFO: Retrieving Pricing Info for: v3-standard-8
09:28:24 INFO: Archiving Costs
09:28:24 [policy-opa-pdp-master-project-csit-verify-opa-pdp] $ /bin/bash -l /tmp/jenkins15705491372243325332.sh
09:28:24 ---> logs-deploy.sh
09:28:24 Setup pyenv:
09:28:24 system
09:28:24 3.8.13
09:28:24 3.9.13
09:28:24 * 3.10.6 (set by /w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp/.python-version)
09:28:24 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-fyfu from file:/tmp/.os_lf_venv
09:28:26 lf-activate-venv(): INFO: Installing: lftools
09:28:34 lf-activate-venv(): INFO: Adding /tmp/venv-fyfu/bin to PATH
09:28:34 INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/policy-opa-pdp-master-project-csit-verify-opa-pdp/163
09:28:34 INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt
09:28:35 Archives upload complete.
09:28:35 INFO: archiving logs to Nexus
09:28:36 ---> uname -a:
09:28:36 Linux prd-ubuntu1804-docker-8c-8g-22280 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
09:28:36
09:28:36
09:28:36 ---> lscpu:
09:28:36 Architecture: x86_64
09:28:36 CPU op-mode(s): 32-bit, 64-bit
09:28:36 Byte Order: Little Endian
09:28:36 CPU(s): 8
09:28:36 On-line CPU(s) list: 0-7
09:28:36 Thread(s) per core: 1
09:28:36 Core(s) per socket: 1
09:28:36 Socket(s): 8
09:28:36 NUMA node(s): 1
09:28:36 Vendor ID: AuthenticAMD
09:28:36 CPU family: 23
09:28:36 Model: 49
09:28:36 Model name: AMD EPYC-Rome Processor
09:28:36 Stepping: 0
09:28:36 CPU MHz: 2800.000
09:28:36 BogoMIPS: 5600.00
09:28:36 Virtualization: AMD-V
09:28:36 Hypervisor vendor: KVM
09:28:36 Virtualization type: full
09:28:36 L1d cache: 32K
09:28:36 L1i cache: 32K
09:28:36 L2 cache: 512K
09:28:36 L3 cache: 16384K
09:28:36 NUMA node0 CPU(s): 0-7
09:28:36 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities
09:28:36
09:28:36
09:28:36 ---> nproc:
09:28:36 8
09:28:36
09:28:36
09:28:36 ---> df -h:
09:28:36 Filesystem Size Used Avail Use% Mounted on
09:28:36 udev 16G 0 16G 0% /dev
09:28:36 tmpfs 3.2G 708K 3.2G 1% /run
09:28:36 /dev/vda1 155G 15G 141G 10% /
09:28:36 tmpfs 16G 0 16G 0% /dev/shm
09:28:36 tmpfs 5.0M 0 5.0M 0% /run/lock
09:28:36 tmpfs 16G 0 16G 0% /sys/fs/cgroup
09:28:36 /dev/vda15 105M 4.4M 100M 5% /boot/efi
09:28:36 tmpfs 3.2G 0 3.2G 0% /run/user/1001
09:28:36
09:28:36
09:28:36 ---> free -m:
09:28:36 total used free shared buff/cache available
09:28:36 Mem: 32167 881 24050 0 7234 30829
09:28:36 Swap: 1023 0 1023
09:28:36
09:28:36
09:28:36 ---> ip addr:
09:28:36 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
09:28:36 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
09:28:36 inet 127.0.0.1/8 scope host lo
09:28:36 valid_lft forever preferred_lft forever
09:28:36 inet6 ::1/128 scope host
09:28:36 valid_lft forever preferred_lft forever
09:28:36 2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000
09:28:36 link/ether fa:16:3e:8b:3d:3f brd ff:ff:ff:ff:ff:ff
09:28:36 inet 10.30.106.208/23 brd 10.30.107.255 scope global dynamic ens3
09:28:36 valid_lft 85804sec preferred_lft 85804sec
09:28:36 inet6 fe80::f816:3eff:fe8b:3d3f/64 scope link
09:28:36 valid_lft forever preferred_lft forever
09:28:36 3: docker0: mtu 1500 qdisc noqueue state DOWN group default
09:28:36 link/ether 02:42:7c:c4:85:21 brd ff:ff:ff:ff:ff:ff
09:28:36 inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0
09:28:36 valid_lft forever preferred_lft forever
09:28:36 inet6 fe80::42:7cff:fec4:8521/64 scope link
09:28:36 valid_lft forever preferred_lft forever
09:28:36
09:28:36
09:28:36 ---> sar -b -r -n DEV:
09:28:36 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-22280) 06/19/25 _x86_64_ (8 CPU)
09:28:36
09:28:36 09:18:42 LINUX RESTART (8 CPU)
09:28:36
09:28:36 09:19:01 tps rtps wtps bread/s bwrtn/s
09:28:36 09:20:01 365.96 66.69 299.27 4309.76 62337.89
09:28:36 09:21:06 679.87 23.06 656.81 2521.84 198172.37
09:28:36 09:22:01 57.26 0.11 57.15 7.85 19196.66
09:28:36 09:23:01 16.53 0.00 16.53 0.00 18191.63
09:28:36 09:24:01 17.53 0.20 17.33 29.60 17358.71
09:28:36 09:25:01 225.93 0.28 225.65 22.80 49419.50
09:28:36 09:26:01 17.93 0.00 17.93 0.00 18043.39
09:28:36 09:27:01 23.40 0.00 23.40 0.00 18178.84
09:28:36 09:28:01 64.16 1.52 62.64 47.33 14574.24
09:28:36 Average: 168.25 10.40 157.85 792.02 47617.38
09:28:36
09:28:36 09:19:01 kbmemfree kbavail kbmemused %memused kbbuffers kbcached kbcommit %commit kbactive kbinact kbdirty
09:28:36 09:20:01 30143828 31696964 2795392 8.49 68904 1795692 1393388 4.10 858932 1652012 177980
09:28:36 09:21:06 24559944 31063152 8379276 25.44 160964 6432260 6142624 18.07 1731092 6025368 5424
09:28:36 09:22:01 23389912 30060776 9549308 28.99 163568 6599908 7410984 21.80 2812772 6093876 2280
09:28:36 09:23:01 23388716 30041392 9550504 28.99 163724 6582248 7574792 22.29 2832136 6074756 512
09:28:36 09:24:01 23192892 29978576 9746328 29.59 171640 6698368 7942208 23.37 2914960 6175908 112928
09:28:36 09:25:01 22675220 29868964 10264000 31.16 204408 7029284 7979360 23.48 3122428 6437836 1892
09:28:36 09:26:01 22722616 29917412 10216604 31.02 204520 7029904 7934668 23.35 3078276 6432472 904
09:28:36 09:27:01 22718588 29913884 10220632 31.03 204616 7030096 7952848 23.40 3081852 6432268 236
09:28:36 09:28:01 24603828 31534764 8335392 25.31 205492 6760560 1676644 4.93 1518644 6185616 11548
09:28:36 Average: 24155060 30452876 8784160 26.67 171982 6217591 6223057 18.31 2439010 5723346 34856
09:28:36
09:28:36 09:19:01 IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil
09:28:36 09:20:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
09:28:36 09:20:01 ens3 567.33 378.16 1710.60 84.22 0.00 0.00 0.00 0.00
09:28:36 09:20:01 lo 2.00 2.00 0.23 0.23 0.00 0.00 0.00 0.00
09:28:36 09:21:06 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
09:28:36 09:21:06 veth497f151 15.60 21.50 2.28 3.04 0.00 0.00 0.00 0.00
09:28:36 09:21:06 vetha5707e6 0.00 0.23 0.00 0.02 0.00 0.00 0.00 0.00
09:28:36 09:21:06 ens3 1123.83 673.52 30876.33 59.84 0.00 0.00 0.00 0.00
09:28:36 09:22:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
09:28:36 09:22:01 veth497f151 145.83 163.59 27.73 25.62 0.00 0.00 0.00 0.00
09:28:36 09:22:01 vetha5707e6 0.16 0.27 0.01 0.01 0.00 0.00 0.00 0.00
09:28:36 09:22:01 ens3 74.99 57.01 340.48 7.78 0.00 0.00 0.00 0.00
09:28:36 09:23:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
09:28:36 09:23:01 veth497f151 0.00 0.02 0.00 0.00 0.00 0.00 0.00 0.00
09:28:36 09:23:01 vetha5707e6 0.35 0.32 0.04 0.90 0.00 0.00 0.00 0.00
09:28:36 09:23:01 ens3 1.43 1.33 0.24 0.79 0.00 0.00 0.00 0.00
09:28:36 09:24:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
09:28:36 09:24:01 veth497f151 98.38 98.85 24.73 11.20 0.00 0.00 0.00 0.00
09:28:36 09:24:01 vetha5707e6 0.60 0.63 0.06 1.16 0.00 0.00 0.00 0.00
09:28:36 09:24:01 ens3 51.32 29.80 848.80 3.00 0.00 0.00 0.00 0.00
09:28:36 09:25:01 docker0 128.28 176.95 8.32 1348.96 0.00 0.00 0.00 0.00
09:28:36 09:25:01 veth497f151 115.46 116.06 28.86 12.73 0.00 0.00 0.00 0.00
09:28:36 09:25:01 vetha5707e6 0.60 0.63 0.06 1.22 0.00 0.00 0.00 0.00
09:28:36 09:25:01 ens3 179.37 129.95 1349.42 10.73 0.00 0.00 0.00 0.00
09:28:36 09:26:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
09:28:36 09:26:01 veth497f151 594.97 597.60 144.58 64.41 0.00 0.00 0.00 0.01
09:28:36 09:26:01 vetha5707e6 0.62 0.63 0.06 1.28 0.00 0.00 0.00 0.00
09:28:36 09:26:01 ens3 0.78 0.67 0.12 0.28 0.00 0.00 0.00 0.00
09:28:36 09:27:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
09:28:36 09:27:01 veth497f151 139.03 139.64 33.61 14.96 0.00 0.00 0.00 0.00
09:28:36 09:27:01 vetha5707e6 0.60 0.58 0.06 1.28 0.00 0.00 0.00 0.00
09:28:36 09:27:01 ens3 0.90 0.82 0.15 0.31 0.00 0.00 0.00 0.00
09:28:36 09:28:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
09:28:36 09:28:01 ens3 44.28 33.91 64.97 30.59 0.00 0.00 0.00 0.00
09:28:36 09:28:01 lo 30.46 30.46 2.69 2.69 0.00 0.00 0.00 0.00
09:28:36 Average: docker0 14.27 19.69 0.93 150.08 0.00 0.00 0.00 0.00
09:28:36 Average: ens3 235.65 150.02 4156.66 22.38 0.00 0.00 0.00 0.00
09:28:36 Average: lo 3.03 3.03 0.27 0.27 0.00 0.00 0.00 0.00
09:28:36
09:28:36
09:28:36 ---> sar -P ALL:
09:28:36 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-22280) 06/19/25 _x86_64_ (8 CPU)
09:28:36
09:28:36 09:18:42 LINUX RESTART (8 CPU)
09:28:36
09:28:36 09:19:01 CPU %user %nice %system %iowait %steal %idle
09:28:36 09:20:01 all 10.92 0.00 1.38 2.68 0.04 84.98
09:28:36 09:20:01 0 11.33 0.00 1.08 0.37 0.05 87.17
09:28:36 09:20:01 1 20.52 0.00 1.59 0.58 0.05 77.26
09:28:36 09:20:01 2 6.32 0.00 0.87 0.18 0.02 92.61
09:28:36 09:20:01 3 5.06 0.00 0.68 4.26 0.03 89.96
09:28:36 09:20:01 4 2.60 0.00 2.77 3.57 0.03 91.03
09:28:36 09:20:01 5 3.99 0.00 0.80 0.33 0.03 94.85
09:28:36 09:20:01 6 4.45 0.00 1.01 10.99 0.03 83.52
09:28:36 09:20:01 7 33.11 0.00 2.17 1.24 0.07 63.42
09:28:36 09:21:06 all 21.43 0.00 9.98 8.64 0.09 59.86
09:28:36 09:21:06 0 18.62 0.00 14.37 2.13 0.10 64.78
09:28:36 09:21:06 1 16.96 0.00 7.60 13.22 0.06 62.17
09:28:36 09:21:06 2 17.64 0.00 8.80 3.18 0.10 70.27
09:28:36 09:21:06 3 30.19 0.00 8.98 10.16 0.10 50.57
09:28:36 09:21:06 4 19.22 0.00 9.14 5.91 0.07 65.66
09:28:36 09:21:06 5 18.76 0.00 9.39 14.20 0.09 57.57
09:28:36 09:21:06 6 28.88 0.00 9.94 17.07 0.13 43.97
09:28:36 09:21:06 7 21.21 0.00 11.50 3.41 0.08 63.80
09:28:36 09:22:01 all 26.26 0.00 3.74 8.55 0.30 61.15
09:28:36 09:22:01 0 31.35 0.00 2.87 0.49 0.26 65.03
09:28:36 09:22:01 1 18.19 0.00 2.35 15.76 0.33 63.38
09:28:36 09:22:01 2 33.42 0.00 6.74 0.07 0.16 59.60
09:28:36 09:22:01 3 24.73 0.00 3.01 0.22 0.11 71.92
09:28:36 09:22:01 4 28.42 0.00 3.78 10.49 0.48 56.83
09:28:36 09:22:01 5 15.34 0.00 2.22 24.53 0.53 57.38
09:28:36 09:22:01 6 22.81 0.00 5.61 15.71 0.15 55.72
09:28:36 09:22:01 7 35.87 0.00 3.20 0.97 0.32 59.64
09:28:36 09:23:01 all 1.31 0.00 0.16 0.32 0.05 98.15
09:28:36 09:23:01 0 1.10 0.00 0.13 0.00 0.07 98.70
09:28:36 09:23:01 1 1.17 0.00 0.15 0.00 0.03 98.65
09:28:36 09:23:01 2 1.47 0.00 0.13 0.00 0.05 98.35
09:28:36 09:23:01 3 1.41 0.00 0.13 0.00 0.07 98.39
09:28:36 09:23:01 4 1.67 0.00 0.18 0.00 0.03 98.12
09:28:36 09:23:01 5 1.18 0.00 0.18 0.00 0.05 98.58
09:28:36 09:23:01 6 1.50 0.00 0.18 2.32 0.08 95.90
09:28:36 09:23:01 7 0.98 0.00 0.13 0.23 0.03 98.61
09:28:36 09:24:01 all 2.31 0.00 0.45 2.55 0.03 94.65
09:28:36 09:24:01 0 2.29 0.00 0.40 0.93 0.03 96.35
09:28:36 09:24:01 1 2.24 0.00 0.47 0.03 0.05 97.21
09:28:36 09:24:01 2 3.26 0.00 0.60 0.00 0.03 96.11
09:28:36 09:24:01 3 2.01 0.00 0.49 0.12 0.03 97.35
09:28:36 09:24:01 4 2.32 0.00 0.30 0.02 0.03 97.33
09:28:36 09:24:01 5 2.23 0.00 0.55 0.32 0.02 96.88
09:28:36 09:24:01 6 1.57 0.00 0.45 11.47 0.05 86.46
09:28:36 09:24:01 7 2.59 0.00 0.38 7.57 0.03 89.43
09:28:36 09:25:01 all 9.01 0.00 2.44 6.03 0.11 82.41
09:28:36 09:25:01 0 9.72 0.00 2.76 5.49 0.10 81.93
09:28:36 09:25:01 1 5.50 0.00 1.98 0.74 0.08 91.71
09:28:36 09:25:01 2 14.64 0.00 3.04 2.92 0.13 79.27
09:28:36 09:25:01 3 5.59 0.00 1.92 11.25 0.10 81.14
09:28:36 09:25:01 4 8.12 0.00 1.74 0.03 0.08 90.02
09:28:36 09:25:01 5 9.79 0.00 2.82 11.05 0.13 76.20
09:28:36 09:25:01 6 6.90 0.00 2.63 7.73 0.13 82.60
09:28:36 09:25:01 7 11.81 0.00 2.63 9.05 0.10 76.41
09:28:36 09:26:01 all 3.34 0.00 0.55 0.58 0.05 95.47
09:28:36 09:26:01 0 3.02 0.00 0.32 0.00 0.05 96.61
09:28:36 09:26:01 1 2.90 0.00 0.83 0.05 0.17 96.04
09:28:36 09:26:01 2 3.41 0.00 0.99 0.05 0.05 95.50
09:28:36 09:26:01 3 2.32 0.00 0.59 4.06 0.03 93.00
09:28:36 09:26:01 4 3.64 0.00 0.53 0.02 0.05 95.76
09:28:36 09:26:01 5 3.30 0.00 0.55 0.53 0.03 95.58
09:28:36 09:26:01 6 3.53 0.00 0.32 0.00 0.03 96.12
09:28:36 09:26:01 7 4.60 0.00 0.32 0.00 0.00 95.08
09:28:36 09:27:01 all 1.04 0.00 0.25 0.68 0.03 98.00
09:28:36 09:27:01 0 1.59 0.00 0.23 0.00 0.02 98.16
09:28:36 09:27:01 1 1.02 0.00 0.20 0.02 0.03 98.73
09:28:36 09:27:01 2 1.29 0.00 0.52 0.03 0.05 98.11
09:28:36 09:27:01 3 0.96 0.00 0.42 5.13 0.03 93.46
09:28:36 09:27:01 4 1.35 0.00 0.22 0.00 0.02 98.41
09:28:36 09:27:01 5 0.67 0.00 0.18 0.23 0.02 98.90
09:28:36 09:27:01 6 0.60 0.00 0.18 0.00 0.03 99.18
09:28:36 09:27:01 7 0.90 0.00 0.07 0.02 0.02 99.00
09:28:36 09:28:01 all 2.87 0.00 0.70 0.48 0.03 95.92
09:28:36 09:28:01 0 1.64 0.00 0.68 0.35 0.02 97.31
09:28:36 09:28:01 1 1.47 0.00 0.63 0.02 0.02 97.86
09:28:36 09:28:01 2 1.45 0.00 0.82 0.23 0.03 97.46
09:28:36 09:28:01 3 4.89 0.00 0.84 2.76 0.05 91.46
09:28:36 09:28:01 4 2.59 0.00 0.48 0.05 0.02 96.86
09:28:36 09:28:01 5 1.35 0.00 0.68 0.23 0.03 97.69
09:28:36 09:28:01 6 8.44 0.00 0.82 0.07 0.05 90.62
09:28:36 09:28:01 7 1.15 0.00 0.63 0.10 0.03 98.08
09:28:36 Average: all 8.36 0.00 2.16 3.29 0.08 86.12
09:28:36 Average: 0 8.80 0.00 2.60 1.09 0.07 87.43
09:28:36 Average: 1 7.55 0.00 1.78 3.11 0.09 87.48
09:28:36 Average: 2 8.96 0.00 2.44 0.74 0.07 87.78
09:28:36 Average: 3 8.18 0.00 1.88 4.34 0.06 85.53
09:28:36 Average: 4 7.13 0.00 2.07 1.98 0.08 88.74
09:28:36 Average: 5 6.15 0.00 1.90 5.49 0.10 86.35
09:28:36 Average: 6 8.38 0.00 2.26 7.04 0.08 82.24
09:28:36 Average: 7 11.76 0.00 2.31 2.55 0.07 83.31
09:28:36
09:28:36
09:28:36
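The statistics above come from sysstat's sar, as the "---> sar -b -r -n DEV:" and "---> sar -P ALL:" headers indicate: block I/O transfer rates, memory utilisation, per-interface network traffic, and per-CPU utilisation for the 09:19-09:28 window of the build. A hedged bash sketch of reproducing such a report from a collected data file; the data-file path is an assumption (sa-day files typically live under /var/log/sysstat or /var/log/sa):

  # Block I/O, memory, and per-interface network statistics, as in the first report above
  sar -b -r -n DEV -f /var/log/sysstat/sa19

  # Per-CPU utilisation (%user/%nice/%system/%iowait/%steal/%idle), as in the second report
  sar -P ALL -f /var/log/sysstat/sa19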