09:40:32 Triggered by Gerrit: https://gerrit.onap.org/r/c/policy/docker/+/141341
09:40:32 Running as SYSTEM
09:40:32 [EnvInject] - Loading node environment variables.
09:40:32 Building remotely on prd-ubuntu1804-docker-8c-8g-22297 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp
09:40:32 [ssh-agent] Looking for ssh-agent implementation...
09:40:32 [ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
09:40:32 $ ssh-agent
09:40:32 SSH_AUTH_SOCK=/tmp/ssh-wWiiM8ppiwIY/agent.2106
09:40:32 SSH_AGENT_PID=2108
09:40:32 [ssh-agent] Started.
09:40:32 Running ssh-add (command line suppressed)
09:40:32 Identity added: /w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp@tmp/private_key_16435614500756322687.key (/w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp@tmp/private_key_16435614500756322687.key)
09:40:32 [ssh-agent] Using credentials onap-jobbuiler (Gerrit user)
09:40:32 The recommended git tool is: NONE
09:40:34 using credential onap-jenkins-ssh
09:40:34 Wiping out workspace first.
09:40:34 Cloning the remote Git repository
09:40:34 Cloning repository git://cloud.onap.org/mirror/policy/docker.git
09:40:34 > git init /w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp # timeout=10
09:40:34 Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
09:40:34 > git --version # timeout=10
09:40:34 > git --version # 'git version 2.17.1'
09:40:34 using GIT_SSH to set credentials Gerrit user
09:40:34 Verifying host key using manually-configured host key entries
09:40:34 > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git +refs/heads/*:refs/remotes/origin/* # timeout=30
09:40:35 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
09:40:35 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
09:40:35 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
09:40:35 Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
09:40:35 using GIT_SSH to set credentials Gerrit user
09:40:35 Verifying host key using manually-configured host key entries
09:40:35 > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git refs/changes/41/141341/2 # timeout=30
09:40:35 > git rev-parse 8b99874d0fe646f509546f6b38b185b8f089ba50^{commit} # timeout=10
09:40:35 JENKINS-19022: warning: possible memory leak due to Git plugin usage; see: https://plugins.jenkins.io/git/#remove-git-plugin-buildsbybranch-builddata-script
09:40:35 Checking out Revision 8b99874d0fe646f509546f6b38b185b8f089ba50 (refs/changes/41/141341/2)
09:40:35 > git config core.sparsecheckout # timeout=10
09:40:35 > git checkout -f 8b99874d0fe646f509546f6b38b185b8f089ba50 # timeout=30
09:40:38 Commit message: "Add missing delete composition in CSIT"
09:40:38 > git rev-parse FETCH_HEAD^{commit} # timeout=10
09:40:38 > git rev-list --no-walk a4383ddb08daf12bc481139efd90352bfa803726 # timeout=10
09:40:39 provisioning config files...
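To reproduce this checkout outside Jenkins, the commands below are a minimal sketch using the mirror URL, Gerrit refspec, and commit hash taken from the log above (the clone directory name is an assumption):

  git clone git://cloud.onap.org/mirror/policy/docker.git policy-docker
  cd policy-docker
  # patchset 2 of Gerrit change 141341, as fetched by the job above
  git fetch origin refs/changes/41/141341/2
  git checkout FETCH_HEAD   # resolves to 8b99874d0fe646f509546f6b38b185b8f089ba50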
09:40:39 copy managed file [npmrc] to file:/home/jenkins/.npmrc
09:40:39 copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf
09:40:39 [policy-opa-pdp-master-project-csit-verify-opa-pdp] $ /bin/bash /tmp/jenkins1951626160514922982.sh
09:40:39 ---> python-tools-install.sh
09:40:39 Setup pyenv:
09:40:39 * system (set by /opt/pyenv/version)
09:40:39 * 3.8.13 (set by /opt/pyenv/version)
09:40:39 * 3.9.13 (set by /opt/pyenv/version)
09:40:39 * 3.10.6 (set by /opt/pyenv/version)
09:40:43 lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-ZZzF
09:40:43 lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv
09:40:48 lf-activate-venv(): INFO: Installing: lftools
09:41:13 lf-activate-venv(): INFO: Adding /tmp/venv-ZZzF/bin to PATH
09:41:13 Generating Requirements File
09:41:34 Python 3.10.6
09:41:34 pip 25.1.1 from /tmp/venv-ZZzF/lib/python3.10/site-packages/pip (python 3.10)
09:41:35 appdirs==1.4.4
09:41:35 argcomplete==3.6.2
09:41:35 aspy.yaml==1.3.0
09:41:35 attrs==25.3.0
09:41:35 autopage==0.5.2
09:41:35 beautifulsoup4==4.13.4
09:41:35 boto3==1.38.39
09:41:35 botocore==1.38.39
09:41:35 bs4==0.0.2
09:41:35 cachetools==5.5.2
09:41:35 certifi==2025.6.15
09:41:35 cffi==1.17.1
09:41:35 cfgv==3.4.0
09:41:35 chardet==5.2.0
09:41:35 charset-normalizer==3.4.2
09:41:35 click==8.2.1
09:41:35 cliff==4.10.0
09:41:35 cmd2==2.6.1
09:41:35 cryptography==3.3.2
09:41:35 debtcollector==3.0.0
09:41:35 decorator==5.2.1
09:41:35 defusedxml==0.7.1
09:41:35 Deprecated==1.2.18
09:41:35 distlib==0.3.9
09:41:35 dnspython==2.7.0
09:41:35 docker==7.1.0
09:41:35 dogpile.cache==1.4.0
09:41:35 durationpy==0.10
09:41:35 email_validator==2.2.0
09:41:35 filelock==3.18.0
09:41:35 future==1.0.0
09:41:35 gitdb==4.0.12
09:41:35 GitPython==3.1.44
09:41:35 google-auth==2.40.3
09:41:35 httplib2==0.22.0
09:41:35 identify==2.6.12
09:41:35 idna==3.10
09:41:35 importlib-resources==1.5.0
09:41:35 iso8601==2.1.0
09:41:35 Jinja2==3.1.6
09:41:35 jmespath==1.0.1
09:41:35 jsonpatch==1.33
09:41:35 jsonpointer==3.0.0
09:41:35 jsonschema==4.24.0
09:41:35 jsonschema-specifications==2025.4.1
09:41:35 keystoneauth1==5.11.1
09:41:35 kubernetes==33.1.0
09:41:35 lftools==0.37.13
09:41:35 lxml==5.4.0
09:41:35 MarkupSafe==3.0.2
09:41:35 msgpack==1.1.1
09:41:35 multi_key_dict==2.0.3
09:41:35 munch==4.0.0
09:41:35 netaddr==1.3.0
09:41:35 niet==1.4.2
09:41:35 nodeenv==1.9.1
09:41:35 oauth2client==4.1.3
09:41:35 oauthlib==3.3.0
09:41:35 openstacksdk==4.6.0
09:41:35 os-client-config==2.1.0
09:41:35 os-service-types==1.7.0
09:41:35 osc-lib==4.0.2
09:41:35 oslo.config==9.8.0
09:41:35 oslo.context==6.0.0
09:41:35 oslo.i18n==6.5.1
09:41:35 oslo.log==7.1.0
09:41:35 oslo.serialization==5.7.0
09:41:35 oslo.utils==9.0.0
09:41:35 packaging==25.0
09:41:35 pbr==6.1.1
09:41:35 platformdirs==4.3.8
09:41:35 prettytable==3.16.0
09:41:35 psutil==7.0.0
09:41:35 pyasn1==0.6.1
09:41:35 pyasn1_modules==0.4.2
09:41:35 pycparser==2.22
09:41:35 pygerrit2==2.0.15
09:41:35 PyGithub==2.6.1
09:41:35 PyJWT==2.10.1
09:41:35 PyNaCl==1.5.0
09:41:35 pyparsing==2.4.7
09:41:35 pyperclip==1.9.0
09:41:35 pyrsistent==0.20.0
09:41:35 python-cinderclient==9.7.0
09:41:35 python-dateutil==2.9.0.post0
09:41:35 python-heatclient==4.2.0
09:41:35 python-jenkins==1.8.2
09:41:35 python-keystoneclient==5.6.0
09:41:35 python-magnumclient==4.8.1
09:41:35 python-openstackclient==8.1.0
09:41:35 python-swiftclient==4.8.0
09:41:35 PyYAML==6.0.2
09:41:35 referencing==0.36.2
09:41:35 requests==2.32.4
09:41:35 requests-oauthlib==2.0.0
09:41:35 requestsexceptions==1.4.0
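As a rough sketch, the python-tools-install step above boils down to the following, assuming plain python3/venv tooling (the job itself uses lf-activate-venv() from LF's global-jjb scripts; the venv path here is illustrative):

  python3 -m venv /tmp/venv-example
  . /tmp/venv-example/bin/activate
  pip install lftools
  pip freeze    # produces the "Generating Requirements File" listing shown above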
09:41:35 rfc3986==2.0.0
09:41:35 rpds-py==0.25.1
09:41:35 rsa==4.9.1
09:41:35 ruamel.yaml==0.18.14
09:41:35 ruamel.yaml.clib==0.2.12
09:41:35 s3transfer==0.13.0
09:41:35 simplejson==3.20.1
09:41:35 six==1.17.0
09:41:35 smmap==5.0.2
09:41:35 soupsieve==2.7
09:41:35 stevedore==5.4.1
09:41:35 tabulate==0.9.0
09:41:35 toml==0.10.2
09:41:35 tomlkit==0.13.3
09:41:35 tqdm==4.67.1
09:41:35 typing_extensions==4.14.0
09:41:35 tzdata==2025.2
09:41:35 urllib3==1.26.20
09:41:35 virtualenv==20.31.2
09:41:35 wcwidth==0.2.13
09:41:35 websocket-client==1.8.0
09:41:35 wrapt==1.17.2
09:41:35 xdg==6.0.0
09:41:35 xmltodict==0.14.2
09:41:35 yq==3.4.3
09:41:35 [EnvInject] - Injecting environment variables from a build step.
09:41:35 [EnvInject] - Injecting as environment variables the properties content
09:41:35 SET_JDK_VERSION=openjdk17
09:41:35 GIT_URL="git://cloud.onap.org/mirror"
09:41:35
09:41:35 [EnvInject] - Variables injected successfully.
09:41:35 [policy-opa-pdp-master-project-csit-verify-opa-pdp] $ /bin/sh /tmp/jenkins2827490652441491614.sh
09:41:35 ---> update-java-alternatives.sh
09:41:35 ---> Updating Java version
09:41:35 ---> Ubuntu/Debian system detected
09:41:35 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode
09:41:35 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode
09:41:35 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode
09:41:36 openjdk version "17.0.4" 2022-07-19
09:41:36 OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04)
09:41:36 OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing)
09:41:36 JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64
09:41:36 [EnvInject] - Injecting environment variables from a build step.
09:41:36 [EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env'
09:41:36 [EnvInject] - Variables injected successfully.
09:41:36 [policy-opa-pdp-master-project-csit-verify-opa-pdp] $ /bin/sh -xe /tmp/jenkins14759613468511154202.sh
09:41:36 + /w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp/csit/run-project-csit.sh opa-pdp
09:41:36 WARNING! Using --password via the CLI is insecure. Use --password-stdin.
09:41:36 WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json.
09:41:36 Configure a credential helper to remove this warning. See
09:41:36 https://docs.docker.com/engine/reference/commandline/login/#credentials-store
09:41:36
09:41:36 Login Succeeded
09:41:36 docker: 'compose' is not a docker command.
09:41:36 See 'docker --help'
09:41:36 Docker Compose Plugin not installed. Installing now...
09:41:37 [curl progress meter omitted: 60.2M downloaded at 79.0M/s]
09:41:37 Setting project configuration for: opa-pdp
09:41:37 Configuring docker compose...
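The login warning above is advisory: Docker recommends --password-stdin over --password. A hedged sketch of that, plus the compose-plugin install the script performs next (the registry host, credential variables, and download URL are assumptions, not read from this log):

  echo "$DOCKER_PASS" | docker login --username "$DOCKER_USER" --password-stdin "$REGISTRY"
  # install docker compose v2 as a per-user CLI plugin
  mkdir -p ~/.docker/cli-plugins
  curl -fsSL -o ~/.docker/cli-plugins/docker-compose \
    https://github.com/docker/compose/releases/latest/download/docker-compose-linux-x86_64
  chmod +x ~/.docker/cli-plugins/docker-compose
  docker compose version   # 'compose' now resolves as a docker subcommand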
09:41:39 Starting opa-pdp using postgres + Grafana/Prometheus
09:41:39 opa-pdp Pulling
09:41:39 grafana Pulling
09:41:39 policy-db-migrator Pulling
09:41:39 prometheus Pulling
09:41:39 postgres Pulling
09:41:39 kafka Pulling
09:41:39 pap Pulling
09:41:39 api Pulling
09:41:39 zookeeper Pulling
09:41:39 [per-layer progress omitted: interleaved "Pulling fs layer" / "Waiting" / "Downloading" / "Verifying Checksum" / "Download complete" / "Extracting" / "Pull complete" redraws for the images above, 09:41:39 through 09:42:02]
09:42:01 pap Pulled
09:42:01 api Pulled
09:42:02 opa-pdp Pulled
09:42:02 policy-db-migrator Pulled
09:42:02 04f6155c873d Downloading
[======================================> ] 82.72MB/107.3MB 09:42:02 85dde7dceb0a Downloading [============> ] 15.68MB/63.48MB 09:42:02 30bb92ff0608 Extracting [========================> ] 4.227MB/8.735MB 09:42:02 55f2b468da67 Extracting [=======================> ] 123.1MB/257.9MB 09:42:02 eabd8714fec9 Downloading [=========================================> ] 312.5MB/375MB 09:42:02 04f6155c873d Downloading [============================================> ] 96.24MB/107.3MB 09:42:02 f3b09c502777 Extracting [=================> ] 20.05MB/56.52MB 09:42:02 85dde7dceb0a Downloading [======================> ] 28.11MB/63.48MB 09:42:02 30bb92ff0608 Extracting [========================================> ] 7.078MB/8.735MB 09:42:02 55f2b468da67 Extracting [=========================> ] 129.2MB/257.9MB 09:42:02 eabd8714fec9 Downloading [===========================================> ] 327.1MB/375MB 09:42:02 30bb92ff0608 Extracting [==================================================>] 8.735MB/8.735MB 09:42:02 04f6155c873d Verifying Checksum 09:42:02 04f6155c873d Download complete 09:42:02 7009d5001b77 Downloading [============> ] 3.01kB/11.92kB 09:42:02 7009d5001b77 Downloading [==================================================>] 11.92kB/11.92kB 09:42:02 7009d5001b77 Verifying Checksum 09:42:02 7009d5001b77 Download complete 09:42:02 85dde7dceb0a Downloading [===============================> ] 40.01MB/63.48MB 09:42:02 538deb30e80c Downloading [==================================================>] 1.225kB/1.225kB 09:42:02 538deb30e80c Verifying Checksum 09:42:02 538deb30e80c Download complete 09:42:02 30bb92ff0608 Pull complete 09:42:03 f3b09c502777 Extracting [=====================> ] 24.51MB/56.52MB 09:42:03 807a2e881ecd Extracting [============================> ] 32.77kB/58.07kB 09:42:03 807a2e881ecd Extracting [==================================================>] 58.07kB/58.07kB 09:42:03 55f2b468da67 Extracting [=========================> ] 133.7MB/257.9MB 09:42:03 eabd8714fec9 Downloading [=============================================> ] 337.9MB/375MB 09:42:03 2d429b9e73a6 Downloading [> ] 293.8kB/29.13MB 09:42:03 85dde7dceb0a Downloading [========================================> ] 51.36MB/63.48MB 09:42:03 55f2b468da67 Extracting [==========================> ] 137.6MB/257.9MB 09:42:03 f3b09c502777 Extracting [========================> ] 27.3MB/56.52MB 09:42:03 eabd8714fec9 Downloading [===============================================> ] 354.7MB/375MB 09:42:03 2d429b9e73a6 Downloading [=====> ] 3.243MB/29.13MB 09:42:03 807a2e881ecd Pull complete 09:42:03 4a4d0948b0bf Extracting [==================================================>] 27.78kB/27.78kB 09:42:03 4a4d0948b0bf Extracting [==================================================>] 27.78kB/27.78kB 09:42:03 85dde7dceb0a Verifying Checksum 09:42:03 85dde7dceb0a Download complete 09:42:03 46eab5b44a35 Downloading [==================================================>] 1.168kB/1.168kB 09:42:03 46eab5b44a35 Verifying Checksum 09:42:03 46eab5b44a35 Download complete 09:42:03 c4d302cc468d Downloading [> ] 48.06kB/4.534MB 09:42:03 55f2b468da67 Extracting [===========================> ] 140.4MB/257.9MB 09:42:03 f3b09c502777 Extracting [==============================> ] 34.54MB/56.52MB 09:42:03 eabd8714fec9 Downloading [=================================================> ] 368.2MB/375MB 09:42:03 2d429b9e73a6 Downloading [============> ] 7.077MB/29.13MB 09:42:03 4a4d0948b0bf Pull complete 09:42:03 eabd8714fec9 Verifying Checksum 09:42:03 eabd8714fec9 Download complete 
09:42:03 55f2b468da67 Extracting [============================> ] 144.8MB/257.9MB 09:42:03 c4d302cc468d Downloading [========> ] 785.3kB/4.534MB 09:42:03 f3b09c502777 Extracting [=========================================> ] 46.79MB/56.52MB 09:42:03 2d429b9e73a6 Downloading [===============> ] 8.846MB/29.13MB 09:42:03 01e0882c90d9 Downloading [> ] 15.3kB/1.447MB 09:42:06 01e0882c90d9 Verifying Checksum 09:42:06 01e0882c90d9 Download complete 09:42:06 04f6155c873d Extracting [> ] 557.1kB/107.3MB 09:42:06 55f2b468da67 Extracting [============================> ] 148.2MB/257.9MB 09:42:06 531ee2cf3c0c Downloading [> ] 80.83kB/8.066MB 09:42:06 c4d302cc468d Downloading [==============================> ] 2.801MB/4.534MB 09:42:06 f3b09c502777 Extracting [================================================> ] 55.15MB/56.52MB 09:42:06 2d429b9e73a6 Downloading [======================> ] 12.98MB/29.13MB 09:42:06 eabd8714fec9 Extracting [> ] 557.1kB/375MB 09:42:06 c4d302cc468d Verifying Checksum 09:42:06 c4d302cc468d Download complete 09:42:06 ed54a7dee1d8 Downloading [> ] 15.3kB/1.196MB 09:42:06 04f6155c873d Extracting [=> ] 2.785MB/107.3MB 09:42:06 531ee2cf3c0c Downloading [================> ] 2.62MB/8.066MB 09:42:06 55f2b468da67 Extracting [=============================> ] 150.4MB/257.9MB 09:42:06 f3b09c502777 Extracting [==================================================>] 56.52MB/56.52MB 09:42:06 2d429b9e73a6 Downloading [=================================> ] 19.76MB/29.13MB 09:42:06 ed54a7dee1d8 Verifying Checksum 09:42:06 ed54a7dee1d8 Download complete 09:42:06 12c5c803443f Downloading [==================================================>] 116B/116B 09:42:06 12c5c803443f Verifying Checksum 09:42:06 12c5c803443f Download complete 09:42:06 eabd8714fec9 Extracting [=> ] 13.93MB/375MB 09:42:06 e27c75a98748 Downloading [===============================================> ] 3.011kB/3.144kB 09:42:06 e27c75a98748 Downloading [==================================================>] 3.144kB/3.144kB 09:42:06 e27c75a98748 Verifying Checksum 09:42:06 e27c75a98748 Download complete 09:42:06 f3b09c502777 Pull complete 09:42:06 531ee2cf3c0c Downloading [============================================> ] 7.208MB/8.066MB 09:42:06 55f2b468da67 Extracting [=============================> ] 152.6MB/257.9MB 09:42:06 e73cb4a42719 Downloading [> ] 539.6kB/109.1MB 09:42:06 408012a7b118 Extracting [==================================================>] 637B/637B 09:42:06 408012a7b118 Extracting [==================================================>] 637B/637B 09:42:06 04f6155c873d Extracting [==> ] 5.014MB/107.3MB 09:42:06 531ee2cf3c0c Verifying Checksum 09:42:06 531ee2cf3c0c Download complete 09:42:06 2d429b9e73a6 Downloading [============================================> ] 25.95MB/29.13MB 09:42:06 a83b68436f09 Downloading [===============> ] 3.011kB/9.919kB 09:42:06 a83b68436f09 Downloading [==================================================>] 9.919kB/9.919kB 09:42:06 a83b68436f09 Download complete 09:42:06 eabd8714fec9 Extracting [==> ] 17.27MB/375MB 09:42:06 787d6bee9571 Downloading [==================================================>] 127B/127B 09:42:06 787d6bee9571 Verifying Checksum 09:42:06 787d6bee9571 Download complete 09:42:06 2d429b9e73a6 Download complete 09:42:06 13ff0988aaea Downloading [==================================================>] 167B/167B 09:42:06 13ff0988aaea Verifying Checksum 09:42:06 13ff0988aaea Download complete 09:42:06 4b82842ab819 Downloading [===========================> ] 3.011kB/5.415kB 09:42:06 
4b82842ab819 Downloading [==================================================>] 5.415kB/5.415kB 09:42:06 4b82842ab819 Verifying Checksum 09:42:06 4b82842ab819 Download complete 09:42:06 7e568a0dc8fb Downloading [==================================================>] 184B/184B 09:42:06 7e568a0dc8fb Verifying Checksum 09:42:06 7e568a0dc8fb Download complete 09:42:06 55f2b468da67 Extracting [==============================> ] 156MB/257.9MB 09:42:06 e73cb4a42719 Downloading [==> ] 5.946MB/109.1MB 09:42:06 04f6155c873d Extracting [===> ] 7.799MB/107.3MB 09:42:06 eabd8714fec9 Extracting [==> ] 22.28MB/375MB 09:42:06 408012a7b118 Pull complete 09:42:06 44986281b8b9 Extracting [==================================================>] 4.022kB/4.022kB 09:42:06 44986281b8b9 Extracting [==================================================>] 4.022kB/4.022kB 09:42:06 e73cb4a42719 Downloading [======> ] 13.52MB/109.1MB 09:42:06 55f2b468da67 Extracting [==============================> ] 158.8MB/257.9MB 09:42:06 2d429b9e73a6 Extracting [> ] 294.9kB/29.13MB 09:42:06 04f6155c873d Extracting [====> ] 10.03MB/107.3MB 09:42:06 eabd8714fec9 Extracting [===> ] 23.95MB/375MB 09:42:06 e73cb4a42719 Downloading [==========> ] 22.17MB/109.1MB 09:42:06 2d429b9e73a6 Extracting [======> ] 3.834MB/29.13MB 09:42:06 55f2b468da67 Extracting [===============================> ] 162.1MB/257.9MB 09:42:06 44986281b8b9 Pull complete 09:42:06 eabd8714fec9 Extracting [===> ] 29.52MB/375MB 09:42:06 04f6155c873d Extracting [======> ] 13.37MB/107.3MB 09:42:06 bf70c5107ab5 Extracting [==================================================>] 1.44kB/1.44kB 09:42:06 bf70c5107ab5 Extracting [==================================================>] 1.44kB/1.44kB 09:42:06 e73cb4a42719 Downloading [============> ] 27.57MB/109.1MB 09:42:06 2d429b9e73a6 Extracting [=========> ] 5.603MB/29.13MB 09:42:06 55f2b468da67 Extracting [===============================> ] 164.3MB/257.9MB 09:42:06 eabd8714fec9 Extracting [====> ] 37.32MB/375MB 09:42:06 04f6155c873d Extracting [=======> ] 15.6MB/107.3MB 09:42:06 e73cb4a42719 Downloading [=================> ] 38.39MB/109.1MB 09:42:06 2d429b9e73a6 Extracting [==============> ] 8.258MB/29.13MB 09:42:06 55f2b468da67 Extracting [================================> ] 167.7MB/257.9MB 09:42:06 eabd8714fec9 Extracting [======> ] 45.68MB/375MB 09:42:06 e73cb4a42719 Downloading [======================> ] 49.74MB/109.1MB 09:42:06 2d429b9e73a6 Extracting [=================> ] 10.32MB/29.13MB 09:42:06 04f6155c873d Extracting [=======> ] 16.71MB/107.3MB 09:42:06 eabd8714fec9 Extracting [======> ] 50.69MB/375MB 09:42:06 e73cb4a42719 Downloading [==========================> ] 57.85MB/109.1MB 09:42:06 55f2b468da67 Extracting [=================================> ] 170.5MB/257.9MB 09:42:06 2d429b9e73a6 Extracting [======================> ] 12.98MB/29.13MB 09:42:06 eabd8714fec9 Extracting [=======> ] 56.82MB/375MB 09:42:06 e73cb4a42719 Downloading [================================> ] 70.83MB/109.1MB 09:42:06 04f6155c873d Extracting [========> ] 17.83MB/107.3MB 09:42:06 2d429b9e73a6 Extracting [=============================> ] 17.1MB/29.13MB 09:42:06 55f2b468da67 Extracting [=================================> ] 171.6MB/257.9MB 09:42:06 eabd8714fec9 Extracting [========> ] 65.18MB/375MB 09:42:06 e73cb4a42719 Downloading [=======================================> ] 85.97MB/109.1MB 09:42:06 04f6155c873d Extracting [=========> ] 19.5MB/107.3MB 09:42:06 2d429b9e73a6 Extracting [=====================================> ] 22.12MB/29.13MB 09:42:06 55f2b468da67 
Extracting [=================================> ] 173.2MB/257.9MB 09:42:06 eabd8714fec9 Extracting [=========> ] 73.53MB/375MB 09:42:06 e73cb4a42719 Downloading [============================================> ] 97.32MB/109.1MB 09:42:06 04f6155c873d Extracting [==========> ] 22.84MB/107.3MB 09:42:06 eabd8714fec9 Extracting [==========> ] 82.44MB/375MB 09:42:10 e73cb4a42719 Verifying Checksum 09:42:10 e73cb4a42719 Download complete 09:42:10 04f6155c873d Extracting [===========> ] 25.62MB/107.3MB 09:42:10 bf70c5107ab5 Pull complete 09:42:10 eabd8714fec9 Extracting [===========> ] 85.23MB/375MB 09:42:10 55f2b468da67 Extracting [=================================> ] 174.4MB/257.9MB 09:42:10 2d429b9e73a6 Extracting [==========================================> ] 24.77MB/29.13MB 09:42:10 04f6155c873d Extracting [=============> ] 28.41MB/107.3MB 09:42:10 eabd8714fec9 Extracting [============> ] 91.36MB/375MB 09:42:10 55f2b468da67 Extracting [==================================> ] 175.5MB/257.9MB 09:42:10 2d429b9e73a6 Extracting [===============================================> ] 27.72MB/29.13MB 09:42:10 1ccde423731d Extracting [==========================> ] 32.77kB/61.44kB 09:42:10 eabd8714fec9 Extracting [============> ] 95.26MB/375MB 09:42:10 1ccde423731d Extracting [==================================================>] 61.44kB/61.44kB 09:42:10 04f6155c873d Extracting [===============> ] 33.42MB/107.3MB 09:42:10 55f2b468da67 Extracting [==================================> ] 177.1MB/257.9MB 09:42:10 04f6155c873d Extracting [=================> ] 37.32MB/107.3MB 09:42:10 eabd8714fec9 Extracting [=============> ] 98.6MB/375MB 09:42:10 55f2b468da67 Extracting [==================================> ] 179.4MB/257.9MB 09:42:10 2d429b9e73a6 Extracting [================================================> ] 28.31MB/29.13MB 09:42:10 55f2b468da67 Extracting [===================================> ] 182.7MB/257.9MB 09:42:10 55f2b468da67 Extracting [====================================> ] 187.2MB/257.9MB 09:42:10 55f2b468da67 Extracting [=====================================> ] 191.6MB/257.9MB 09:42:10 55f2b468da67 Extracting [=====================================> ] 194.4MB/257.9MB 09:42:10 2d429b9e73a6 Extracting [=================================================> ] 28.9MB/29.13MB 09:42:10 eabd8714fec9 Extracting [=============> ] 104.7MB/375MB 09:42:10 1ccde423731d Pull complete 09:42:10 04f6155c873d Extracting [==================> ] 39.55MB/107.3MB 09:42:10 55f2b468da67 Extracting [=====================================> ] 195MB/257.9MB 09:42:10 eabd8714fec9 Extracting [==============> ] 105.3MB/375MB 09:42:10 7221d93db8a9 Extracting [==================================================>] 100B/100B 09:42:10 7221d93db8a9 Extracting [==================================================>] 100B/100B 09:42:10 2d429b9e73a6 Extracting [==================================================>] 29.13MB/29.13MB 09:42:10 04f6155c873d Extracting [==================> ] 40.67MB/107.3MB 09:42:10 eabd8714fec9 Extracting [==============> ] 107.5MB/375MB 09:42:10 55f2b468da67 Extracting [======================================> ] 196.1MB/257.9MB 09:42:10 04f6155c873d Extracting [====================> ] 44.01MB/107.3MB 09:42:10 eabd8714fec9 Extracting [==============> ] 110.9MB/375MB 09:42:10 55f2b468da67 Extracting [======================================> ] 196.6MB/257.9MB 09:42:10 04f6155c873d Extracting [======================> ] 47.91MB/107.3MB 09:42:10 eabd8714fec9 Extracting [===============> ] 114.2MB/375MB 09:42:10 55f2b468da67 Extracting 
[======================================> ] 197.8MB/257.9MB 09:42:10 eabd8714fec9 Extracting [===============> ] 119.8MB/375MB 09:42:10 04f6155c873d Extracting [========================> ] 51.81MB/107.3MB 09:42:10 55f2b468da67 Extracting [======================================> ] 200MB/257.9MB 09:42:10 eabd8714fec9 Extracting [================> ] 124.2MB/375MB 09:42:10 04f6155c873d Extracting [=========================> ] 55.15MB/107.3MB 09:42:10 55f2b468da67 Extracting [=======================================> ] 202.2MB/257.9MB 09:42:10 eabd8714fec9 Extracting [=================> ] 129.2MB/375MB 09:42:10 04f6155c873d Extracting [===========================> ] 59.05MB/107.3MB 09:42:10 55f2b468da67 Extracting [=======================================> ] 203.9MB/257.9MB 09:42:10 eabd8714fec9 Extracting [=================> ] 133.7MB/375MB 09:42:10 04f6155c873d Extracting [=============================> ] 63.5MB/107.3MB 09:42:10 eabd8714fec9 Extracting [==================> ] 135.4MB/375MB 09:42:10 04f6155c873d Extracting [=============================> ] 64.06MB/107.3MB 09:42:10 2d429b9e73a6 Pull complete 09:42:10 55f2b468da67 Extracting [=======================================> ] 206.1MB/257.9MB 09:42:10 7221d93db8a9 Pull complete 09:42:10 04f6155c873d Extracting [==============================> ] 66.29MB/107.3MB 09:42:10 eabd8714fec9 Extracting [==================> ] 138.1MB/375MB 09:42:10 55f2b468da67 Extracting [========================================> ] 207.2MB/257.9MB 09:42:10 04f6155c873d Extracting [===============================> ] 68.52MB/107.3MB 09:42:10 eabd8714fec9 Extracting [==================> ] 140.9MB/375MB 09:42:10 55f2b468da67 Extracting [========================================> ] 208.3MB/257.9MB 09:42:10 04f6155c873d Extracting [=================================> ] 71.86MB/107.3MB 09:42:10 eabd8714fec9 Extracting [===================> ] 144.8MB/375MB 09:42:10 46eab5b44a35 Extracting [==================================================>] 1.168kB/1.168kB 09:42:10 46eab5b44a35 Extracting [==================================================>] 1.168kB/1.168kB 09:42:10 55f2b468da67 Extracting [========================================> ] 210.6MB/257.9MB 09:42:10 04f6155c873d Extracting [==================================> ] 74.09MB/107.3MB 09:42:10 eabd8714fec9 Extracting [===================> ] 147.1MB/375MB 09:42:10 55f2b468da67 Extracting [=========================================> ] 212.2MB/257.9MB 09:42:10 04f6155c873d Extracting [====================================> ] 77.99MB/107.3MB 09:42:10 eabd8714fec9 Extracting [===================> ] 149.8MB/375MB 09:42:10 7df673c7455d Extracting [==================================================>] 694B/694B 09:42:10 7df673c7455d Extracting [==================================================>] 694B/694B 09:42:10 55f2b468da67 Extracting [=========================================> ] 214.5MB/257.9MB 09:42:10 04f6155c873d Extracting [======================================> ] 81.89MB/107.3MB 09:42:10 eabd8714fec9 Extracting [====================> ] 152.6MB/375MB 09:42:10 55f2b468da67 Extracting [=========================================> ] 215.6MB/257.9MB 09:42:10 04f6155c873d Extracting [=======================================> ] 84.12MB/107.3MB 09:42:10 eabd8714fec9 Extracting [====================> ] 154.9MB/375MB 09:42:10 55f2b468da67 Extracting [==========================================> ] 218.9MB/257.9MB 09:42:10 04f6155c873d Extracting [=========================================> ] 89.13MB/107.3MB 09:42:10 eabd8714fec9 Extracting 
[=====================> ] 157.6MB/375MB 09:42:10 55f2b468da67 Extracting [==========================================> ] 221.7MB/257.9MB 09:42:10 04f6155c873d Extracting [============================================> ] 96.37MB/107.3MB 09:42:10 eabd8714fec9 Extracting [=====================> ] 161MB/375MB 09:42:10 04f6155c873d Extracting [==============================================> ] 99.16MB/107.3MB 09:42:10 46eab5b44a35 Pull complete 09:42:10 55f2b468da67 Extracting [===========================================> ] 224.5MB/257.9MB 09:42:10 eabd8714fec9 Extracting [=====================> ] 163.2MB/375MB 09:42:10 04f6155c873d Extracting [===============================================> ] 100.8MB/107.3MB 09:42:10 55f2b468da67 Extracting [============================================> ] 227.3MB/257.9MB 09:42:10 eabd8714fec9 Extracting [======================> ] 167.1MB/375MB 09:42:10 04f6155c873d Extracting [================================================> ] 103.1MB/107.3MB 09:42:10 eabd8714fec9 Extracting [=======================> ] 173.8MB/375MB 09:42:10 55f2b468da67 Extracting [============================================> ] 229MB/257.9MB 09:42:10 04f6155c873d Extracting [================================================> ] 104.2MB/107.3MB 09:42:10 eabd8714fec9 Extracting [========================> ] 183.3MB/375MB 09:42:10 55f2b468da67 Extracting [============================================> ] 231.2MB/257.9MB 09:42:10 04f6155c873d Extracting [=================================================> ] 105.3MB/107.3MB 09:42:10 eabd8714fec9 Extracting [=========================> ] 193.9MB/375MB 09:42:10 04f6155c873d Extracting [==================================================>] 107.3MB/107.3MB 09:42:10 55f2b468da67 Extracting [=============================================> ] 232.8MB/257.9MB 09:42:10 7df673c7455d Pull complete 09:42:10 c4d302cc468d Extracting [> ] 65.54kB/4.534MB 09:42:10 eabd8714fec9 Extracting [===========================> ] 202.8MB/375MB 09:42:11 55f2b468da67 Extracting [=============================================> ] 235.1MB/257.9MB 09:42:11 c4d302cc468d Extracting [====> ] 393.2kB/4.534MB 09:42:11 eabd8714fec9 Extracting [===========================> ] 209.5MB/375MB 09:42:11 55f2b468da67 Extracting [=============================================> ] 236.7MB/257.9MB 09:42:11 c4d302cc468d Extracting [=======================================> ] 3.604MB/4.534MB 09:42:11 c4d302cc468d Extracting [==================================================>] 4.534MB/4.534MB 09:42:11 55f2b468da67 Extracting [==============================================> ] 239.5MB/257.9MB 09:42:11 eabd8714fec9 Extracting [============================> ] 217.3MB/375MB 09:42:11 55f2b468da67 Extracting [===============================================> ] 243.4MB/257.9MB 09:42:11 eabd8714fec9 Extracting [=============================> ] 222.3MB/375MB 09:42:11 eabd8714fec9 Extracting [==============================> ] 226.7MB/375MB 09:42:11 55f2b468da67 Extracting [===============================================> ] 244.5MB/257.9MB 09:42:11 eabd8714fec9 Extracting [==============================> ] 231.7MB/375MB 09:42:12 55f2b468da67 Extracting [================================================> ] 251.2MB/257.9MB 09:42:12 eabd8714fec9 Extracting [===============================> ] 232.8MB/375MB 09:42:12 04f6155c873d Pull complete 09:42:12 c4d302cc468d Pull complete 09:42:12 01e0882c90d9 Extracting [=> ] 32.77kB/1.447MB 09:42:12 prometheus Pulled 09:42:12 eabd8714fec9 Extracting [===============================> ] 
235.6MB/375MB 09:42:12 55f2b468da67 Extracting [=================================================> ] 254MB/257.9MB 09:42:12 01e0882c90d9 Extracting [==========> ] 294.9kB/1.447MB 09:42:12 85dde7dceb0a Extracting [> ] 557.1kB/63.48MB 09:42:12 01e0882c90d9 Extracting [==================================================>] 1.447MB/1.447MB 09:42:12 55f2b468da67 Extracting [==================================================>] 257.9MB/257.9MB 09:42:12 55f2b468da67 Extracting [==================================================>] 257.9MB/257.9MB 09:42:12 eabd8714fec9 Extracting [===============================> ] 237.9MB/375MB 09:42:12 eabd8714fec9 Extracting [===============================> ] 239MB/375MB 09:42:12 85dde7dceb0a Extracting [> ] 1.114MB/63.48MB 09:42:12 01e0882c90d9 Pull complete 09:42:12 eabd8714fec9 Extracting [================================> ] 241.8MB/375MB 09:42:12 85dde7dceb0a Extracting [=> ] 1.671MB/63.48MB 09:42:12 eabd8714fec9 Extracting [================================> ] 245.1MB/375MB 09:42:12 eabd8714fec9 Extracting [=================================> ] 248.4MB/375MB 09:42:13 85dde7dceb0a Extracting [==> ] 2.785MB/63.48MB 09:42:13 eabd8714fec9 Extracting [=================================> ] 249MB/375MB 09:42:13 85dde7dceb0a Extracting [===> ] 4.456MB/63.48MB 09:42:13 eabd8714fec9 Extracting [=================================> ] 252.9MB/375MB 09:42:13 85dde7dceb0a Extracting [===> ] 5.014MB/63.48MB 09:42:13 eabd8714fec9 Extracting [=================================> ] 254.6MB/375MB 09:42:13 eabd8714fec9 Extracting [==================================> ] 258.5MB/375MB 09:42:13 85dde7dceb0a Extracting [======> ] 7.799MB/63.48MB 09:42:13 eabd8714fec9 Extracting [===================================> ] 263.5MB/375MB 09:42:13 85dde7dceb0a Extracting [=======> ] 10.03MB/63.48MB 09:42:13 eabd8714fec9 Extracting [===================================> ] 266.8MB/375MB 09:42:13 85dde7dceb0a Extracting [=========> ] 12.26MB/63.48MB 09:42:13 eabd8714fec9 Extracting [===================================> ] 269.1MB/375MB 09:42:13 85dde7dceb0a Extracting [============> ] 15.6MB/63.48MB 09:42:14 531ee2cf3c0c Extracting [> ] 98.3kB/8.066MB 09:42:14 531ee2cf3c0c Extracting [=> ] 294.9kB/8.066MB 09:42:14 eabd8714fec9 Extracting [====================================> ] 270.7MB/375MB 09:42:14 531ee2cf3c0c Extracting [========> ] 1.376MB/8.066MB 09:42:14 85dde7dceb0a Extracting [=============> ] 16.71MB/63.48MB 09:42:14 eabd8714fec9 Extracting [====================================> ] 271.3MB/375MB 09:42:14 531ee2cf3c0c Extracting [====================> ] 3.342MB/8.066MB 09:42:14 85dde7dceb0a Extracting [=============> ] 17.27MB/63.48MB 09:42:14 eabd8714fec9 Extracting [====================================> ] 271.8MB/375MB 09:42:14 85dde7dceb0a Extracting [==============> ] 18.38MB/63.48MB 09:42:14 531ee2cf3c0c Extracting [=============================> ] 4.719MB/8.066MB 09:42:15 eabd8714fec9 Extracting [====================================> ] 272.4MB/375MB 09:42:15 85dde7dceb0a Extracting [================> ] 20.61MB/63.48MB 09:42:15 531ee2cf3c0c Extracting [====================================> ] 5.898MB/8.066MB 09:42:15 85dde7dceb0a Extracting [================> ] 21.17MB/63.48MB 09:42:15 eabd8714fec9 Extracting [====================================> ] 273.5MB/375MB 09:42:15 531ee2cf3c0c Extracting [==================================================>] 8.066MB/8.066MB 09:42:15 85dde7dceb0a Extracting [==================> ] 23.4MB/63.48MB 09:42:15 eabd8714fec9 Extracting 
[====================================> ] 275.2MB/375MB 09:42:15 85dde7dceb0a Extracting [=====================> ] 27.3MB/63.48MB 09:42:15 eabd8714fec9 Extracting [=====================================> ] 279.1MB/375MB 09:42:15 85dde7dceb0a Extracting [======================> ] 28.97MB/63.48MB 09:42:15 eabd8714fec9 Extracting [=====================================> ] 281.3MB/375MB 09:42:15 85dde7dceb0a Extracting [========================> ] 31.2MB/63.48MB 09:42:15 eabd8714fec9 Extracting [======================================> ] 286.9MB/375MB 09:42:15 85dde7dceb0a Extracting [==========================> ] 33.98MB/63.48MB 09:42:15 eabd8714fec9 Extracting [======================================> ] 292.5MB/375MB 09:42:15 85dde7dceb0a Extracting [=============================> ] 37.32MB/63.48MB 09:42:15 eabd8714fec9 Extracting [=======================================> ] 294.7MB/375MB 09:42:15 85dde7dceb0a Extracting [================================> ] 40.67MB/63.48MB 09:42:16 eabd8714fec9 Extracting [=======================================> ] 296.4MB/375MB 09:42:16 85dde7dceb0a Extracting [==================================> ] 44.01MB/63.48MB 09:42:16 eabd8714fec9 Extracting [=======================================> ] 298.6MB/375MB 09:42:16 85dde7dceb0a Extracting [====================================> ] 46.24MB/63.48MB 09:42:16 85dde7dceb0a Extracting [=======================================> ] 49.58MB/63.48MB 09:42:16 eabd8714fec9 Extracting [========================================> ] 300.8MB/375MB 09:42:16 85dde7dceb0a Extracting [========================================> ] 51.81MB/63.48MB 09:42:16 eabd8714fec9 Extracting [========================================> ] 303.6MB/375MB 09:42:16 85dde7dceb0a Extracting [==========================================> ] 54.59MB/63.48MB 09:42:16 eabd8714fec9 Extracting [========================================> ] 305.8MB/375MB 09:42:16 eabd8714fec9 Extracting [========================================> ] 306.9MB/375MB 09:42:16 eabd8714fec9 Extracting [=========================================> ] 309.7MB/375MB 09:42:16 eabd8714fec9 Extracting [=========================================> ] 311.4MB/375MB 09:42:17 eabd8714fec9 Extracting [=========================================> ] 312MB/375MB 09:42:17 85dde7dceb0a Extracting [==============================================> ] 59.05MB/63.48MB 09:42:17 55f2b468da67 Pull complete 09:42:17 eabd8714fec9 Extracting [=========================================> ] 313.6MB/375MB 09:42:18 eabd8714fec9 Extracting [=========================================> ] 314.7MB/375MB 09:42:18 85dde7dceb0a Extracting [==============================================> ] 59.6MB/63.48MB 09:42:18 eabd8714fec9 Extracting [==========================================> ] 317.5MB/375MB 09:42:18 85dde7dceb0a Extracting [==================================================>] 63.48MB/63.48MB 09:42:18 85dde7dceb0a Extracting [==================================================>] 63.48MB/63.48MB 09:42:18 eabd8714fec9 Extracting [==========================================> ] 321.4MB/375MB 09:42:18 82bfc142787e Extracting [> ] 98.3kB/8.613MB 09:42:18 eabd8714fec9 Extracting [===========================================> ] 323.6MB/375MB 09:42:19 eabd8714fec9 Extracting [===========================================> ] 324.8MB/375MB 09:42:19 82bfc142787e Extracting [==> ] 491.5kB/8.613MB 09:42:19 eabd8714fec9 Extracting [===========================================> ] 327.5MB/375MB 09:42:19 82bfc142787e Extracting 
[=============================================> ] 7.864MB/8.613MB 09:42:19 82bfc142787e Extracting [==================================================>] 8.613MB/8.613MB 09:42:19 eabd8714fec9 Extracting [===========================================> ] 329.8MB/375MB 09:42:19 eabd8714fec9 Extracting [============================================> ] 332MB/375MB 09:42:19 eabd8714fec9 Extracting [============================================> ] 335.3MB/375MB 09:42:20 eabd8714fec9 Extracting [=============================================> ] 340.4MB/375MB 09:42:20 531ee2cf3c0c Pull complete 09:42:20 eabd8714fec9 Extracting [=============================================> ] 340.9MB/375MB 09:42:20 eabd8714fec9 Extracting [=============================================> ] 342MB/375MB 09:42:20 eabd8714fec9 Extracting [=============================================> ] 342.6MB/375MB 09:42:21 eabd8714fec9 Extracting [=============================================> ] 343.7MB/375MB 09:42:21 eabd8714fec9 Extracting [==============================================> ] 345.9MB/375MB 09:42:21 eabd8714fec9 Extracting [==============================================> ] 350.9MB/375MB 09:42:21 ed54a7dee1d8 Extracting [=> ] 32.77kB/1.196MB 09:42:21 85dde7dceb0a Pull complete 09:42:21 ed54a7dee1d8 Extracting [============> ] 294.9kB/1.196MB 09:42:21 eabd8714fec9 Extracting [===============================================> ] 357.1MB/375MB 09:42:21 ed54a7dee1d8 Extracting [==================================================>] 1.196MB/1.196MB 09:42:21 ed54a7dee1d8 Extracting [==================================================>] 1.196MB/1.196MB 09:42:21 eabd8714fec9 Extracting [================================================> ] 361MB/375MB 09:42:21 eabd8714fec9 Extracting [=================================================> ] 367.7MB/375MB 09:42:21 eabd8714fec9 Extracting [=================================================> ] 372.7MB/375MB 09:42:21 eabd8714fec9 Extracting [==================================================>] 375MB/375MB 09:42:22 82bfc142787e Pull complete 09:42:22 7009d5001b77 Extracting [==================================================>] 11.92kB/11.92kB 09:42:22 7009d5001b77 Extracting [==================================================>] 11.92kB/11.92kB 09:42:25 46baca71a4ef Extracting [==================================================>] 18.11kB/18.11kB 09:42:25 46baca71a4ef Extracting [==================================================>] 18.11kB/18.11kB 09:42:25 ed54a7dee1d8 Pull complete 09:42:25 eabd8714fec9 Pull complete 09:42:25 12c5c803443f Extracting [==================================================>] 116B/116B 09:42:25 12c5c803443f Extracting [==================================================>] 116B/116B 09:42:25 7009d5001b77 Pull complete 09:42:25 46baca71a4ef Pull complete 09:42:25 45fd2fec8a19 Extracting [==================================================>] 1.103kB/1.103kB 09:42:25 45fd2fec8a19 Extracting [==================================================>] 1.103kB/1.103kB 09:42:25 538deb30e80c Extracting [==================================================>] 1.225kB/1.225kB 09:42:25 538deb30e80c Extracting [==================================================>] 1.225kB/1.225kB 09:42:25 12c5c803443f Pull complete 09:42:25 b0e0ef7895f4 Extracting [> ] 393.2kB/37.01MB 09:42:25 e27c75a98748 Extracting [==================================================>] 3.144kB/3.144kB 09:42:25 e27c75a98748 Extracting [==================================================>] 3.144kB/3.144kB 09:42:25 45fd2fec8a19 Pull 
complete 09:42:26 538deb30e80c Pull complete 09:42:26 8f10199ed94b Extracting [> ] 98.3kB/8.768MB 09:42:26 grafana Pulled 09:42:26 b0e0ef7895f4 Extracting [============> ] 9.437MB/37.01MB 09:42:26 8f10199ed94b Extracting [==> ] 491.5kB/8.768MB 09:42:26 e27c75a98748 Pull complete 09:42:26 b0e0ef7895f4 Extracting [=========================> ] 18.87MB/37.01MB 09:42:26 8f10199ed94b Extracting [===============================================> ] 8.258MB/8.768MB 09:42:26 8f10199ed94b Extracting [==================================================>] 8.768MB/8.768MB 09:42:26 e73cb4a42719 Extracting [> ] 557.1kB/109.1MB 09:42:26 b0e0ef7895f4 Extracting [=======================================> ] 29.1MB/37.01MB 09:42:26 8f10199ed94b Pull complete 09:42:26 f963a77d2726 Extracting [==================================================>] 21.44kB/21.44kB 09:42:26 f963a77d2726 Extracting [==================================================>] 21.44kB/21.44kB 09:42:26 b0e0ef7895f4 Extracting [==================================================>] 37.01MB/37.01MB 09:42:26 e73cb4a42719 Extracting [==> ] 5.571MB/109.1MB 09:42:26 b0e0ef7895f4 Pull complete 09:42:26 c0c90eeb8aca Extracting [==================================================>] 1.105kB/1.105kB 09:42:26 c0c90eeb8aca Extracting [==================================================>] 1.105kB/1.105kB 09:42:26 f963a77d2726 Pull complete 09:42:26 e73cb4a42719 Extracting [====> ] 9.47MB/109.1MB 09:42:26 c0c90eeb8aca Pull complete 09:42:26 5cfb27c10ea5 Extracting [==================================================>] 852B/852B 09:42:26 5cfb27c10ea5 Extracting [==================================================>] 852B/852B 09:42:26 f3a82e9f1761 Extracting [> ] 458.8kB/44.41MB 09:42:26 e73cb4a42719 Extracting [======> ] 13.37MB/109.1MB 09:42:26 5cfb27c10ea5 Pull complete 09:42:26 40a5eed61bb0 Extracting [==================================================>] 98B/98B 09:42:26 40a5eed61bb0 Extracting [==================================================>] 98B/98B 09:42:26 f3a82e9f1761 Extracting [=========> ] 8.716MB/44.41MB 09:42:26 e73cb4a42719 Extracting [========> ] 18.38MB/109.1MB 09:42:26 f3a82e9f1761 Extracting [=====================> ] 19.27MB/44.41MB 09:42:26 40a5eed61bb0 Pull complete 09:42:26 e040ea11fa10 Extracting [==================================================>] 173B/173B 09:42:26 e040ea11fa10 Extracting [==================================================>] 173B/173B 09:42:26 e73cb4a42719 Extracting [==========> ] 23.4MB/109.1MB 09:42:26 f3a82e9f1761 Extracting [==================================> ] 30.74MB/44.41MB 09:42:26 e73cb4a42719 Extracting [============> ] 26.74MB/109.1MB 09:42:26 e040ea11fa10 Pull complete 09:42:26 f3a82e9f1761 Extracting [================================================> ] 42.66MB/44.41MB 09:42:26 f3a82e9f1761 Extracting [==================================================>] 44.41MB/44.41MB 09:42:27 f3a82e9f1761 Pull complete 09:42:27 79161a3f5362 Extracting [==================================================>] 4.656kB/4.656kB 09:42:27 79161a3f5362 Extracting [==================================================>] 4.656kB/4.656kB 09:42:27 09d5a3f70313 Extracting [> ] 557.1kB/109.2MB 09:42:27 e73cb4a42719 Extracting [==============> ] 31.2MB/109.1MB 09:42:27 09d5a3f70313 Extracting [======> ] 13.93MB/109.2MB 09:42:27 e73cb4a42719 Extracting [================> ] 35.65MB/109.1MB 09:42:27 79161a3f5362 Pull complete 09:42:27 9c266ba63f51 Extracting [==================================================>] 1.105kB/1.105kB 09:42:27 
9c266ba63f51 Extracting [==================================================>] 1.105kB/1.105kB 09:42:27 09d5a3f70313 Extracting [============> ] 27.85MB/109.2MB 09:42:27 e73cb4a42719 Extracting [===================> ] 41.78MB/109.1MB 09:42:27 9c266ba63f51 Pull complete 09:42:27 2e8a7df9c2ee Extracting [==================================================>] 851B/851B 09:42:27 2e8a7df9c2ee Extracting [==================================================>] 851B/851B 09:42:27 09d5a3f70313 Extracting [===================> ] 41.78MB/109.2MB 09:42:27 e73cb4a42719 Extracting [=====================> ] 47.35MB/109.1MB 09:42:27 09d5a3f70313 Extracting [==========================> ] 56.82MB/109.2MB 09:42:27 2e8a7df9c2ee Pull complete 09:42:27 10f05dd8b1db Extracting [==================================================>] 98B/98B 09:42:27 10f05dd8b1db Extracting [==================================================>] 98B/98B 09:42:27 e73cb4a42719 Extracting [=======================> ] 51.25MB/109.1MB 09:42:27 09d5a3f70313 Extracting [===============================> ] 68.52MB/109.2MB 09:42:27 e73cb4a42719 Extracting [========================> ] 52.92MB/109.1MB 09:42:27 10f05dd8b1db Pull complete 09:42:27 41dac8b43ba6 Extracting [==================================================>] 171B/171B 09:42:27 41dac8b43ba6 Extracting [==================================================>] 171B/171B 09:42:27 09d5a3f70313 Extracting [====================================> ] 80.22MB/109.2MB 09:42:27 e73cb4a42719 Extracting [=========================> ] 55.71MB/109.1MB 09:42:27 09d5a3f70313 Extracting [==========================================> ] 93.03MB/109.2MB 09:42:27 41dac8b43ba6 Pull complete 09:42:27 71a9f6a9ab4d Extracting [=======> ] 32.77kB/230.6kB 09:42:27 e73cb4a42719 Extracting [===========================> ] 59.05MB/109.1MB 09:42:27 09d5a3f70313 Extracting [===============================================> ] 104.7MB/109.2MB 09:42:27 71a9f6a9ab4d Extracting [==================================================>] 230.6kB/230.6kB 09:42:27 e73cb4a42719 Extracting [=============================> ] 65.18MB/109.1MB 09:42:27 71a9f6a9ab4d Pull complete 09:42:27 09d5a3f70313 Extracting [==================================================>] 109.2MB/109.2MB 09:42:27 09d5a3f70313 Extracting [==================================================>] 109.2MB/109.2MB 09:42:28 e73cb4a42719 Extracting [================================> ] 70.75MB/109.1MB 09:42:28 09d5a3f70313 Pull complete 09:42:28 356f5c2c843b Extracting [==================================================>] 3.623kB/3.623kB 09:42:28 356f5c2c843b Extracting [==================================================>] 3.623kB/3.623kB 09:42:28 da3ed5db7103 Extracting [> ] 557.1kB/127.4MB 09:42:28 e73cb4a42719 Extracting [==================================> ] 75.2MB/109.1MB 09:42:28 356f5c2c843b Pull complete 09:42:28 da3ed5db7103 Extracting [===> ] 8.356MB/127.4MB 09:42:28 kafka Pulled 09:42:28 e73cb4a42719 Extracting [====================================> ] 79.66MB/109.1MB 09:42:28 da3ed5db7103 Extracting [========> ] 20.61MB/127.4MB 09:42:28 e73cb4a42719 Extracting [=======================================> ] 85.79MB/109.1MB 09:42:28 da3ed5db7103 Extracting [=============> ] 33.98MB/127.4MB 09:42:28 e73cb4a42719 Extracting [=========================================> ] 91.36MB/109.1MB 09:42:28 da3ed5db7103 Extracting [================> ] 41.78MB/127.4MB 09:42:28 e73cb4a42719 Extracting [==========================================> ] 93.59MB/109.1MB 09:42:28 da3ed5db7103 Extracting 
[====================> ] 52.92MB/127.4MB 09:42:28 e73cb4a42719 Extracting [============================================> ] 97.48MB/109.1MB 09:42:28 da3ed5db7103 Extracting [=========================> ] 65.73MB/127.4MB 09:42:28 e73cb4a42719 Extracting [==============================================> ] 100.8MB/109.1MB 09:42:28 da3ed5db7103 Extracting [===============================> ] 79.66MB/127.4MB 09:42:28 e73cb4a42719 Extracting [===============================================> ] 103.6MB/109.1MB 09:42:28 da3ed5db7103 Extracting [=====================================> ] 96.37MB/127.4MB 09:42:28 da3ed5db7103 Extracting [=========================================> ] 106.4MB/127.4MB 09:42:28 e73cb4a42719 Extracting [================================================> ] 105.8MB/109.1MB 09:42:29 da3ed5db7103 Extracting [==============================================> ] 119.2MB/127.4MB 09:42:29 e73cb4a42719 Extracting [=================================================> ] 107.5MB/109.1MB 09:42:29 da3ed5db7103 Extracting [================================================> ] 123.1MB/127.4MB 09:42:29 e73cb4a42719 Extracting [==================================================>] 109.1MB/109.1MB 09:42:29 da3ed5db7103 Extracting [=================================================> ] 127MB/127.4MB 09:42:29 e73cb4a42719 Pull complete 09:42:29 da3ed5db7103 Extracting [==================================================>] 127.4MB/127.4MB 09:42:29 a83b68436f09 Extracting [==================================================>] 9.919kB/9.919kB 09:42:29 a83b68436f09 Extracting [==================================================>] 9.919kB/9.919kB 09:42:29 da3ed5db7103 Pull complete 09:42:29 c955f6e31a04 Extracting [==================================================>] 3.446kB/3.446kB 09:42:29 c955f6e31a04 Extracting [==================================================>] 3.446kB/3.446kB 09:42:29 a83b68436f09 Pull complete 09:42:29 787d6bee9571 Extracting [==================================================>] 127B/127B 09:42:29 787d6bee9571 Extracting [==================================================>] 127B/127B 09:42:29 c955f6e31a04 Pull complete 09:42:29 zookeeper Pulled 09:42:29 787d6bee9571 Pull complete 09:42:29 13ff0988aaea Extracting [==================================================>] 167B/167B 09:42:29 13ff0988aaea Extracting [==================================================>] 167B/167B 09:42:29 13ff0988aaea Pull complete 09:42:29 4b82842ab819 Extracting [==================================================>] 5.415kB/5.415kB 09:42:29 4b82842ab819 Extracting [==================================================>] 5.415kB/5.415kB 09:42:29 4b82842ab819 Pull complete 09:42:29 7e568a0dc8fb Extracting [==================================================>] 184B/184B 09:42:29 7e568a0dc8fb Extracting [==================================================>] 184B/184B 09:42:29 7e568a0dc8fb Pull complete 09:42:30 postgres Pulled 09:42:30 Network compose_default Creating 09:42:30 Network compose_default Created 09:42:30 Container zookeeper Creating 09:42:30 Container prometheus Creating 09:42:30 Container postgres Creating 09:42:41 Container prometheus Created 09:42:41 Container grafana Creating 09:42:41 Container zookeeper Created 09:42:41 Container kafka Creating 09:42:41 Container postgres Created 09:42:41 Container policy-db-migrator Creating 09:42:41 Container policy-db-migrator Created 09:42:41 Container policy-api Creating 09:42:41 Container grafana Created 09:42:41 Container kafka Created 09:42:41 Container policy-api 
09:42:41 Container policy-api Created
09:42:41 Container policy-pap Creating
09:42:41 Container policy-pap Created
09:42:41 Container policy-opa-pdp Creating
09:42:41 Container policy-opa-pdp Created
09:42:41 Container prometheus Starting
09:42:41 Container zookeeper Starting
09:42:41 Container postgres Starting
09:42:42 Container postgres Started
09:42:42 Container policy-db-migrator Starting
09:42:43 Container policy-db-migrator Started
09:42:43 Container policy-api Starting
09:42:44 Container policy-api Started
09:42:44 Container prometheus Started
09:42:44 Container grafana Starting
09:42:45 Container grafana Started
09:42:46 Container zookeeper Started
09:42:46 Container kafka Starting
09:42:47 Container kafka Started
09:42:47 Container policy-pap Starting
09:42:48 Container policy-pap Started
09:42:48 Container policy-opa-pdp Starting
09:42:48 Container policy-opa-pdp Started
09:42:48 Prometheus server: http://localhost:30259
09:42:48 Grafana server: http://localhost:30269
09:42:48 Waiting 3 minutes for OPA-PDP to start...
09:45:48 Checking if REST port 30003 is open on localhost ...
09:45:49 IMAGE                                                      NAMES            STATUS
09:45:49 nexus3.onap.org:10001/onap/policy-opa-pdp:1.0.8-SNAPSHOT   policy-opa-pdp   Up 3 minutes
09:45:49 nexus3.onap.org:10001/onap/policy-pap:4.2.1-SNAPSHOT       policy-pap       Up 3 minutes
09:45:49 nexus3.onap.org:10001/onap/policy-api:4.2.1-SNAPSHOT       policy-api       Up 3 minutes
09:45:49 nexus3.onap.org:10001/confluentinc/cp-kafka:7.4.9          kafka            Up 3 minutes
09:45:49 nexus3.onap.org:10001/grafana/grafana:latest               grafana          Up 3 minutes
09:45:49 nexus3.onap.org:10001/confluentinc/cp-zookeeper:latest     zookeeper        Up 3 minutes
09:45:49 nexus3.onap.org:10001/prom/prometheus:latest               prometheus       Up 3 minutes
09:45:49 nexus3.onap.org:10001/library/postgres:16.4                postgres         Up 3 minutes
09:45:49 Checking if REST port 30012 is open on localhost ...
09:45:49 IMAGE                                                      NAMES            STATUS
09:45:49 nexus3.onap.org:10001/onap/policy-opa-pdp:1.0.8-SNAPSHOT   policy-opa-pdp   Up 3 minutes
09:45:49 nexus3.onap.org:10001/onap/policy-pap:4.2.1-SNAPSHOT       policy-pap       Up 3 minutes
09:45:49 nexus3.onap.org:10001/onap/policy-api:4.2.1-SNAPSHOT       policy-api       Up 3 minutes
09:45:49 nexus3.onap.org:10001/confluentinc/cp-kafka:7.4.9          kafka            Up 3 minutes
09:45:49 nexus3.onap.org:10001/grafana/grafana:latest               grafana          Up 3 minutes
09:45:49 nexus3.onap.org:10001/confluentinc/cp-zookeeper:latest     zookeeper        Up 3 minutes
09:45:49 nexus3.onap.org:10001/prom/prometheus:latest               prometheus       Up 3 minutes
09:45:49 nexus3.onap.org:10001/library/postgres:16.4                postgres         Up 3 minutes
09:45:49 Cloning into '/w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp/csit/resources/tests/models'...
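
[Editor's note: the REST port checks above (ports 30003 and 30012) are performed by the CSIT harness scripts, which are not included in this log. A probe of that kind can be sketched in bash as below; the wait_for_port helper, its retry count, and its sleep interval are illustrative assumptions, not the project's actual script.]

    #!/bin/bash
    # Hypothetical sketch of a REST port probe; not the actual CSIT helper.
    wait_for_port() {
      local host="$1" port="$2" retries="${3:-60}"
      for ((i = 0; i < retries; i++)); do
        # nc -z only verifies that the TCP port accepts a connection
        if nc -z "$host" "$port" 2>/dev/null; then
          echo "Port $port is open on $host"
          return 0
        fi
        sleep 2
      done
      echo "Timed out waiting for $host:$port" >&2
      return 1
    }

    wait_for_port localhost 30003   # OPA-PDP REST port seen in the log above
    wait_for_port localhost 30012
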
09:45:49 Building robot framework docker image
09:46:28 sha256:2b5eeeb31ec5fa2c9e984d67d0124f117988870de1b0d5e375fb54f638994548
09:46:32 top - 09:46:32 up 6 min, 0 users, load average: 1.37, 1.37, 0.73
09:46:32 Tasks: 219 total, 1 running, 148 sleeping, 0 stopped, 0 zombie
09:46:32 %Cpu(s): 10.2 us, 2.7 sy, 0.0 ni, 82.9 id, 4.1 wa, 0.0 hi, 0.1 si, 0.1 st
09:46:32
09:46:32         total   used   free   shared   buff/cache   available
09:46:32 Mem:    31G     2.3G   21G    28M      7.3G         28G
09:46:32 Swap:   1.0G    0B     1.0G
09:46:32
09:46:32 IMAGE                                                      NAMES            STATUS
09:46:32 nexus3.onap.org:10001/onap/policy-opa-pdp:1.0.8-SNAPSHOT   policy-opa-pdp   Up 3 minutes
09:46:32 nexus3.onap.org:10001/onap/policy-pap:4.2.1-SNAPSHOT       policy-pap       Up 3 minutes
09:46:32 nexus3.onap.org:10001/onap/policy-api:4.2.1-SNAPSHOT       policy-api       Up 3 minutes
09:46:32 nexus3.onap.org:10001/confluentinc/cp-kafka:7.4.9          kafka            Up 3 minutes
09:46:32 nexus3.onap.org:10001/grafana/grafana:latest               grafana          Up 3 minutes
09:46:32 nexus3.onap.org:10001/confluentinc/cp-zookeeper:latest     zookeeper        Up 3 minutes
09:46:32 nexus3.onap.org:10001/prom/prometheus:latest               prometheus       Up 3 minutes
09:46:32 nexus3.onap.org:10001/library/postgres:16.4                postgres         Up 3 minutes
09:46:32
09:46:34 CONTAINER ID   NAME             CPU %   MEM USAGE / LIMIT     MEM %   NET I/O           BLOCK I/O       PIDS
09:46:34 62a496ada155   policy-opa-pdp   0.17%   11.78MiB / 31.41GiB   0.04%   80.7kB / 78.1kB   0B / 0B         21
09:46:34 cc29dd0922f8   policy-pap       0.62%   482.9MiB / 31.41GiB   1.50%   2.21MB / 1.23MB   0B / 139MB      69
09:46:34 54e850b6a5af   policy-api       0.12%   425.9MiB / 31.41GiB   1.32%   1.15MB / 1.05MB   0B / 0B         59
09:46:34 014fdf4d61f2   kafka            1.95%   385.6MiB / 31.41GiB   1.20%   306kB / 290kB     0B / 700kB      83
09:46:34 f47c84d071a6   grafana          0.20%   109.2MiB / 31.41GiB   0.34%   19MB / 216kB      0B / 31.2MB     19
09:46:34 272473cfb1e7   zookeeper        0.10%   84.16MiB / 31.41GiB   0.26%   57.8kB / 51.1kB   0B / 475kB      62
09:46:34 375dfc12a2c3   prometheus       0.00%   20.88MiB / 31.41GiB   0.06%   205kB / 10kB      0B / 0B         13
09:46:34 a69be0c50b4e   postgres         1.50%   87.04MiB / 31.41GiB   0.27%   2.55MB / 3.74MB   242kB / 159MB   26
09:46:34
09:46:35 Container policy-csit Creating
09:46:35 Container policy-csit Created
09:46:35 Attaching to policy-csit
09:46:36 policy-csit | Invoking the robot tests from: opa-pdp-test.robot opa-pdp-slas.robot
09:46:36 policy-csit | Run Robot test
09:46:36 policy-csit | ROBOT_VARIABLES=-v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies
09:46:36 policy-csit | -v NODETEMPLATES:/opt/robotworkspace/models/models-examples/src/main/resources/nodetemplates
09:46:36 policy-csit | -v POLICY_API_IP:policy-api:6969
09:46:36 policy-csit | -v POLICY_RUNTIME_ACM_IP:policy-clamp-runtime-acm:6969
09:46:36 policy-csit | -v POLICY_PARTICIPANT_SIM_IP:policy-clamp-ac-sim-ppnt:6969
09:46:36 policy-csit | -v POLICY_PAP_IP:policy-pap:6969
09:46:36 policy-csit | -v APEX_IP:policy-apex-pdp:6969
09:46:36 policy-csit | -v APEX_EVENTS_IP:policy-apex-pdp:23324
09:46:36 policy-csit | -v KAFKA_IP:kafka:9092
09:46:36 policy-csit | -v PROMETHEUS_IP:prometheus:9090
09:46:36 policy-csit | -v POLICY_PDPX_IP:policy-xacml-pdp:6969
09:46:36 policy-csit | -v POLICY_OPA_IP:policy-opa-pdp:8282
09:46:36 policy-csit | -v POLICY_DROOLS_IP:policy-drools-pdp:9696
09:46:36 policy-csit | -v DROOLS_IP:policy-drools-apps:6969
09:46:36 policy-csit | -v DROOLS_IP_2:policy-drools-apps:9696
09:46:36 policy-csit | -v TEMP_FOLDER:/tmp/distribution
09:46:36 policy-csit | -v DISTRIBUTION_IP:policy-distribution:6969
09:46:36 policy-csit | -v TEST_ENV:docker
09:46:36 policy-csit | -v JAEGER_IP:jaeger:16686
09:46:36 policy-csit | Starting Robot test suites ...
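
[Editor's note: the ROBOT_VARIABLES above are standard Robot Framework -v NAME:value overrides. The container entrypoint that consumes them is not shown in this log; a minimal sketch of such an invocation, using only a few of the logged variables and an output directory assumed from the result paths printed further down, would be:]

    #!/bin/bash
    # Sketch only: how the logged ROBOT_VARIABLES are typically passed to Robot Framework.
    robot --outputdir /tmp/results \
      -v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies \
      -v POLICY_OPA_IP:policy-opa-pdp:8282 \
      -v PROMETHEUS_IP:prometheus:9090 \
      -v TEST_ENV:docker \
      opa-pdp-test.robot opa-pdp-slas.robot
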
09:46:36 policy-csit | ==============================================================================
09:46:36 policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas
09:46:36 policy-csit | ==============================================================================
09:46:36 policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas.Opa-Pdp-Test
09:46:36 policy-csit | ==============================================================================
09:46:36 policy-csit | Healthcheck :: Verify OPA PDP health check | PASS |
09:46:36 policy-csit | ------------------------------------------------------------------------------
09:46:36 policy-csit | ValidateDataBeforePolicyDeployment | PASS |
09:46:36 policy-csit | ------------------------------------------------------------------------------
09:47:03 policy-csit | ValidatesZonePolicy | PASS |
09:47:03 policy-csit | ------------------------------------------------------------------------------
09:47:28 policy-csit | ValidatesVehiclePolicy | PASS |
09:47:28 policy-csit | ------------------------------------------------------------------------------
09:47:54 policy-csit | ValidatesAbacPolicy | PASS |
09:47:54 policy-csit | ------------------------------------------------------------------------------
09:47:54 policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas.Opa-Pdp-Test | PASS |
09:47:54 policy-csit | 5 tests, 5 passed, 0 failed
09:47:54 policy-csit | ==============================================================================
09:47:54 policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas.Opa-Pdp-Slas
09:47:54 policy-csit | ==============================================================================
09:48:54 policy-csit | WaitForPrometheusServer :: Sleep time to wait for Prometheus serve... | PASS |
09:48:54 policy-csit | ------------------------------------------------------------------------------
09:48:54 policy-csit | ValidateOPAPolicyDecisionsTotalCounter :: Validate opa policy deci... | PASS |
09:48:54 policy-csit | ------------------------------------------------------------------------------
09:48:54 policy-csit | ValidateOPAPolicyDataTotalCounter :: Validate opa policy data coun... | PASS |
09:48:54 policy-csit | ------------------------------------------------------------------------------
09:48:54 policy-csit | ValidateOPADecisionAverageResponseTime :: Ensure average response ... | PASS |
09:48:54 policy-csit | ------------------------------------------------------------------------------
09:48:54 policy-csit | ValidateOPADataAverageResponseTime :: Ensure average response time... | PASS |
09:47:54 policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas.Opa-Pdp-Slas
09:47:54 policy-csit | ==============================================================================
09:48:54 policy-csit | WaitForPrometheusServer :: Sleep time to wait for Prometheus serve... | PASS |
09:48:54 policy-csit | ------------------------------------------------------------------------------
09:48:54 policy-csit | ValidateOPAPolicyDecisionsTotalCounter :: Validate opa policy deci... | PASS |
09:48:54 policy-csit | ------------------------------------------------------------------------------
09:48:54 policy-csit | ValidateOPAPolicyDataTotalCounter :: Validate opa policy data coun... | PASS |
09:48:54 policy-csit | ------------------------------------------------------------------------------
09:48:54 policy-csit | ValidateOPADecisionAverageResponseTime :: Ensure average response ... | PASS |
09:48:54 policy-csit | ------------------------------------------------------------------------------
09:48:54 policy-csit | ValidateOPADataAverageResponseTime :: Ensure average response time... | PASS |
09:48:54 policy-csit | ------------------------------------------------------------------------------
09:48:54 policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas.Opa-Pdp-Slas | PASS |
09:48:54 policy-csit | 5 tests, 5 passed, 0 failed
09:48:54 policy-csit | ==============================================================================
09:48:54 policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas | PASS |
09:48:54 policy-csit | 10 tests, 10 passed, 0 failed
09:48:54 policy-csit | ==============================================================================
09:48:54 policy-csit | Output: /tmp/results/output.xml
09:48:54 policy-csit | Log:    /tmp/results/log.html
09:48:54 policy-csit | Report: /tmp/results/report.html
09:48:54 policy-csit | RESULT: 0
09:48:55 policy-csit exited with code 0
09:48:55 IMAGE                                                       NAMES            STATUS
09:48:55 nexus3.onap.org:10001/onap/policy-opa-pdp:1.0.8-SNAPSHOT    policy-opa-pdp   Up 6 minutes
09:48:55 nexus3.onap.org:10001/onap/policy-pap:4.2.1-SNAPSHOT        policy-pap       Up 6 minutes
09:48:55 nexus3.onap.org:10001/onap/policy-api:4.2.1-SNAPSHOT        policy-api       Up 6 minutes
09:48:55 nexus3.onap.org:10001/confluentinc/cp-kafka:7.4.9           kafka            Up 6 minutes
09:48:55 nexus3.onap.org:10001/grafana/grafana:latest                grafana          Up 6 minutes
09:48:55 nexus3.onap.org:10001/confluentinc/cp-zookeeper:latest      zookeeper        Up 6 minutes
09:48:55 nexus3.onap.org:10001/prom/prometheus:latest                prometheus       Up 6 minutes
09:48:55 nexus3.onap.org:10001/library/postgres:16.4                 postgres         Up 6 minutes
09:48:55 Shut down started!
09:48:56 Collecting logs from docker compose containers...
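Once the suites finish, the harness tears the stack down and first snapshots every container's output, which is where the per-container logs below come from. A minimal sketch of such collection with the Python Docker SDK (an assumption for illustration; the real harness may well shell out to docker compose instead, and the container names match the tables above):

    import docker

    client = docker.from_env()
    for container in client.containers.list():  # policy-opa-pdp, policy-pap, kafka, ...
        # Timestamped logs keep each container's output alignable with the console log.
        text = container.logs(timestamps=True).decode("utf-8", errors="replace")
        with open(f"/tmp/{container.name}.log", "w", encoding="utf-8") as fh:
            fh.write(text)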
var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning" 09:48:57 grafana | logger=settings t=2025-06-19T09:42:45.775927862Z level=info msg=Target target=[all] 09:48:57 grafana | logger=settings t=2025-06-19T09:42:45.775933812Z level=info msg="Path Home" path=/usr/share/grafana 09:48:57 grafana | logger=settings t=2025-06-19T09:42:45.775937432Z level=info msg="Path Data" path=/var/lib/grafana 09:48:57 grafana | logger=settings t=2025-06-19T09:42:45.775940132Z level=info msg="Path Logs" path=/var/log/grafana 09:48:57 grafana | logger=settings t=2025-06-19T09:42:45.775944613Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins 09:48:57 grafana | logger=settings t=2025-06-19T09:42:45.775947833Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning 09:48:57 grafana | logger=settings t=2025-06-19T09:42:45.775952633Z level=info msg="App mode production" 09:48:57 grafana | logger=featuremgmt t=2025-06-19T09:42:45.77630035Z level=info msg=FeatureToggles pluginsDetailsRightPanel=true publicDashboardsScene=true alertingNotificationsStepMode=true addFieldFromCalculationStatFunctions=true alertingRuleVersionHistoryRestore=true prometheusAzureOverrideAudience=true unifiedRequestLog=true kubernetesClientDashboardsFolders=true dashboardScene=true grafanaconThemes=true pinNavItems=true annotationPermissionUpdate=true alertingInsights=true angularDeprecationUI=true formatString=true externalCorePlugins=true alertingApiServer=true nestedFolders=true kubernetesPlaylists=true prometheusUsesCombobox=true cloudWatchCrossAccountQuerying=true logRowsPopoverMenu=true lokiLabelNamesQueryApi=true tlsMemcached=true recoveryThreshold=true lokiQueryHints=true lokiQuerySplitting=true dataplaneFrontendFallback=true useSessionStorageForRedirection=true ssoSettingsApi=true logsContextDatasourceUi=true influxdbBackendMigration=true alertingUIOptimizeReducer=true reportingUseRawTimeRange=true azureMonitorPrometheusExemplars=true promQLScope=true alertingRuleRecoverDeleted=true groupToNestedTableTransformation=true recordedQueriesMulti=true awsAsyncQueryCaching=true dashboardSceneForViewers=true dashgpt=true logsInfiniteScrolling=true azureMonitorEnableUserAuth=true dashboardSceneSolo=true lokiStructuredMetadata=true failWrongDSUID=true newDashboardSharingComponent=true onPremToCloudMigrations=true correlations=true newPDFRendering=true alertingQueryAndExpressionsStepMode=true panelMonitoring=true cloudWatchNewLabelParsing=true transformationsRedesign=true cloudWatchRoundUpEndTime=true alertRuleRestore=true ssoSettingsSAML=true newFiltersUI=true unifiedStorageSearchPermissionFiltering=true logsExploreTableVisualisation=true logsPanelControls=true alertingRulePermanentlyDelete=true preinstallAutoUpdate=true alertingSimplifiedRouting=true 09:48:57 grafana | logger=sqlstore t=2025-06-19T09:42:45.776358771Z level=info msg="Connecting to DB" dbtype=sqlite3 09:48:57 grafana | logger=sqlstore t=2025-06-19T09:42:45.776373602Z level=info msg="Creating SQLite database file" path=/var/lib/grafana/grafana.db 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:45.778036678Z level=info msg="Locking database" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:45.778048758Z level=info msg="Starting DB migrations" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:45.778714762Z level=info msg="Executing migration" id="create migration_log table" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:45.779597251Z level=info msg="Migration successfully executed" id="create migration_log table" duration=882.149µs 
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:45.792542901Z level=info msg="Executing migration" id="create user table"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:45.794283919Z level=info msg="Migration successfully executed" id="create user table" duration=1.745048ms
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:45.79855437Z level=info msg="Executing migration" id="add unique index user.login"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:45.799310336Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=755.876µs
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:45.806078993Z level=info msg="Executing migration" id="add unique index user.email"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:45.807078754Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=997.291µs
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:45.811641833Z level=info msg="Executing migration" id="drop index UQE_user_login - v1"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:45.812765537Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=1.122754ms
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:45.816378435Z level=info msg="Executing migration" id="drop index UQE_user_email - v1"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:45.81703869Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=656.634µs
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:45.822762383Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:45.826373131Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=3.609078ms
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:45.83004787Z level=info msg="Executing migration" id="create user table v2"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:45.831351278Z level=info msg="Migration successfully executed" id="create user table v2" duration=1.302928ms
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:45.835106409Z level=info msg="Executing migration" id="create index UQE_user_login - v2"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:45.835795804Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=689.095µs
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:45.842112641Z level=info msg="Executing migration" id="create index UQE_user_email - v2"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:45.843214174Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=1.100623ms
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:45.84766106Z level=info msg="Executing migration" id="copy data_source v1 to v2"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:45.848275953Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=614.793µs
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:45.852056505Z level=info msg="Executing migration" id="Drop old table user_v1"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:45.852537765Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=481.13µs
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:45.856972141Z level=info msg="Executing migration" id="Add column help_flags1 to user table"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:45.858146636Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=1.173905ms
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:45.864265329Z level=info msg="Executing migration" id="Update user table charset"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:45.86430973Z level=info msg="Migration successfully executed" id="Update user table charset" duration=45.131µs
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:45.868794526Z level=info msg="Executing migration" id="Add last_seen_at column to user"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:45.870694017Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=1.898931ms
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:45.87504253Z level=info msg="Executing migration" id="Add missing user data"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:45.875391759Z level=info msg="Migration successfully executed" id="Add missing user data" duration=348.799µs
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:45.879253132Z level=info msg="Executing migration" id="Add is_disabled column to user"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:45.880321544Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=1.068012ms
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:45.885729281Z level=info msg="Executing migration" id="Add index user.login/user.email"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:45.886850956Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=1.120245ms
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:45.890967644Z level=info msg="Executing migration" id="Add is_service_account column to user"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:45.892772564Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=1.807719ms
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:45.896939503Z level=info msg="Executing migration" id="Update is_service_account column to nullable"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:45.905224752Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=8.284369ms
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:45.911233741Z level=info msg="Executing migration" id="Add uid column to user"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:45.912349156Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=1.114885ms
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:45.916391483Z level=info msg="Executing migration" id="Update uid column values for users"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:45.916763781Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=372.308µs
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:45.920805678Z level=info msg="Executing migration" id="Add unique index user_uid"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:45.921983073Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=1.176445ms
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:45.92597676Z level=info msg="Executing migration" id="Add is_provisioned column to user"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:45.927125424Z level=info msg="Migration successfully executed" id="Add is_provisioned column to user" duration=1.148204ms
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:45.934248699Z level=info msg="Executing migration" id="update login field with orgid to allow for multiple service accounts with same name across orgs"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:45.934798Z level=info msg="Migration successfully executed" id="update login field with orgid to allow for multiple service accounts with same name across orgs" duration=549.571µs
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:45.939167174Z level=info msg="Executing migration" id="update service accounts login field orgid to appear only once"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:45.940014293Z level=info msg="Migration successfully executed" id="update service accounts login field orgid to appear only once" duration=846.779µs
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:45.946114344Z level=info msg="Executing migration" id="update login and email fields to lowercase"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:45.946996333Z level=info msg="Migration successfully executed" id="update login and email fields to lowercase" duration=885.929µs
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:45.955156319Z level=info msg="Executing migration" id="update login and email fields to lowercase2"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:45.955516247Z level=info msg="Migration successfully executed" id="update login and email fields to lowercase2" duration=359.838µs
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:45.963931118Z level=info msg="Executing migration" id="create temp user table v1-7"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:45.96493855Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=1.007892ms
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:45.969943288Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:45.970679455Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=733.747µs
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:45.975901316Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:45.976622533Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=720.747µs
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:45.981659791Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:45.982402377Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=742.426µs
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:45.987765823Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:45.988491839Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=725.736µs
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:45.9931976Z level=info msg="Executing migration" id="Update temp_user table charset"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:45.993223641Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=26.611µs
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:45.998437943Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1"
level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:45.999129758Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=691.745µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.003700886Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.004363581Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=662.465µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.008580801Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.009587922Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=1.006661ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.014647621Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.015311485Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=663.664µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.019569867Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.024769979Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=5.198191ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.029440809Z level=info msg="Executing migration" id="create temp_user v2" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.030376339Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=935.33µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.035366676Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.036133572Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=766.876µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.039942884Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.04067351Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=730.156µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.045101615Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.045887131Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=785.286µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.051125554Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.05186991Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=745.806µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.056439438Z level=info msg="Executing migration" id="copy temp_user v1 to v2" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.057034Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=593.952µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.061849564Z 
level=info msg="Executing migration" id="drop temp_user_tmp_qwerty" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.062637611Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=787.596µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.068842314Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.069228663Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=386.119µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.073571375Z level=info msg="Executing migration" id="create star table" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.074647339Z level=info msg="Migration successfully executed" id="create star table" duration=1.075764ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.079008062Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.080160597Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=1.152025ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.084457059Z level=info msg="Executing migration" id="Add column dashboard_uid in star" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.085956192Z level=info msg="Migration successfully executed" id="Add column dashboard_uid in star" duration=1.495732ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.094803261Z level=info msg="Executing migration" id="Add column org_id in star" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.096593249Z level=info msg="Migration successfully executed" id="Add column org_id in star" duration=1.791688ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.101725219Z level=info msg="Executing migration" id="Add column updated in star" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.103223542Z level=info msg="Migration successfully executed" id="Add column updated in star" duration=1.497913ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.107180716Z level=info msg="Executing migration" id="add index in star table on dashboard_uid, org_id and user_id columns" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.107988574Z level=info msg="Migration successfully executed" id="add index in star table on dashboard_uid, org_id and user_id columns" duration=807.398µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.111954689Z level=info msg="Executing migration" id="create org table v1" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.112717815Z level=info msg="Migration successfully executed" id="create org table v1" duration=763.086µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.120591464Z level=info msg="Executing migration" id="create index UQE_org_name - v1" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.121886353Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=1.293568ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.126610543Z level=info msg="Executing migration" id="create org_user table v1" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.127366359Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=752.176µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.13298522Z level=info 
msg="Executing migration" id="create index IDX_org_user_org_id - v1" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.134021373Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=1.034883ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.138774265Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.140106803Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=1.332628ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.14604539Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.147261307Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=1.205177ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.151238902Z level=info msg="Executing migration" id="Update org table charset" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.151286773Z level=info msg="Migration successfully executed" id="Update org table charset" duration=48.691µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.155635667Z level=info msg="Executing migration" id="Update org_user table charset" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.155674008Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=39.591µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.16002208Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.160351317Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=329.507µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.166705604Z level=info msg="Executing migration" id="create dashboard table" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.16790083Z level=info msg="Migration successfully executed" id="create dashboard table" duration=1.194787ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.17164229Z level=info msg="Executing migration" id="add index dashboard.account_id" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.172473428Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=830.799µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.176606406Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.177478365Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=871.579µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.181382099Z level=info msg="Executing migration" id="create dashboard_tag table" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.182137045Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=752.036µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.188537672Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.189875661Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=1.336469ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.194216974Z level=info 
msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.195307378Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=1.090554ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.19964788Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.205369503Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=5.716723ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.212317553Z level=info msg="Executing migration" id="create dashboard v2" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.21313386Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=816.097µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.21874262Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.219573198Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=830.258µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.226063318Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.227466977Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=1.400099ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.233994188Z level=info msg="Executing migration" id="copy dashboard v1 to v2" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.234384836Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=388.298µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.240424286Z level=info msg="Executing migration" id="drop table dashboard_v1" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.241726723Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=1.301907ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.247569039Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.247597789Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=30.311µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.252175138Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.255233323Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=3.057795ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.262333926Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.264479221Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=2.145345ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.270103562Z level=info msg="Executing migration" id="Add column gnetId in dashboard" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.272102545Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=1.998643ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.276220074Z 
level=info msg="Executing migration" id="Add index for gnetId in dashboard" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.277085722Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=865.428µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.28490488Z level=info msg="Executing migration" id="Add column plugin_id in dashboard" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.288592799Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=3.611767ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.294588948Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.295376884Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=787.526µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.300956004Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.301779772Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=824.018µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.308500636Z level=info msg="Executing migration" id="Update dashboard table charset" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.308530267Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=30.301µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.312798709Z level=info msg="Executing migration" id="Update dashboard_tag table charset" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.3128386Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=40.851µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.317854517Z level=info msg="Executing migration" id="Add column folder_id in dashboard" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.320983875Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=3.128798ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.325243865Z level=info msg="Executing migration" id="Add column isFolder in dashboard" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.327533765Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=2.28949ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.332758827Z level=info msg="Executing migration" id="Add column has_acl in dashboard" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.335451645Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=2.692478ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.340064953Z level=info msg="Executing migration" id="Add column uid in dashboard" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.342166148Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=2.097655ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.34641023Z level=info msg="Executing migration" id="Update uid column values in dashboard" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.346680266Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=268.016µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.352156323Z level=info msg="Executing 
migration" id="Add unique index dashboard_org_id_uid" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.353068463Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=911.099µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.357363775Z level=info msg="Executing migration" id="Remove unique index org_id_slug" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.358842376Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=1.477951ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.365184522Z level=info msg="Executing migration" id="Update dashboard title length" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.365230193Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=47.191µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.371496618Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.372873317Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=1.376089ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.381267818Z level=info msg="Executing migration" id="create dashboard_provisioning" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.382568956Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=1.301029ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.387411179Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.392981229Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=5.56924ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.39862385Z level=info msg="Executing migration" id="create dashboard_provisioning v2" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.399343196Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=719.056µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.403055455Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.404311952Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=1.255507ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.410470945Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.411363884Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=892.809µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.417598817Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.417947184Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=348.427µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.422432681Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.423288559Z level=info 
msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=855.278µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.427541641Z level=info msg="Executing migration" id="Add check_sum column" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.43123186Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=3.689339ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.438411633Z level=info msg="Executing migration" id="Add index for dashboard_title" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.440131271Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=1.718527ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.444156597Z level=info msg="Executing migration" id="delete tags for deleted dashboards" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.444564786Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=412.959µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.449265826Z level=info msg="Executing migration" id="delete stars for deleted dashboards" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.449641754Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=344.207µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.454620261Z level=info msg="Executing migration" id="Add index for dashboard_is_folder" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.455996981Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=1.37572ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.460565399Z level=info msg="Executing migration" id="Add isPublic for dashboard" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.463269277Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=2.703098ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.470676946Z level=info msg="Executing migration" id="Add deleted for dashboard" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.473133269Z level=info msg="Migration successfully executed" id="Add deleted for dashboard" duration=2.455793ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.477183515Z level=info msg="Executing migration" id="Add index for deleted" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.478140447Z level=info msg="Migration successfully executed" id="Add index for deleted" duration=973.421µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.482670763Z level=info msg="Executing migration" id="Add column dashboard_uid in dashboard_tag" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.485227148Z level=info msg="Migration successfully executed" id="Add column dashboard_uid in dashboard_tag" duration=2.555805ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.49140048Z level=info msg="Executing migration" id="Add column org_id in dashboard_tag" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.493996376Z level=info msg="Migration successfully executed" id="Add column org_id in dashboard_tag" duration=2.595076ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.498546534Z level=info msg="Executing migration" id="Add missing dashboard_uid and org_id to dashboard_tag" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.499012284Z level=info msg="Migration successfully executed" id="Add missing dashboard_uid and org_id 
to dashboard_tag" duration=465.89µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.504705956Z level=info msg="Executing migration" id="Add apiVersion for dashboard" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.507064557Z level=info msg="Migration successfully executed" id="Add apiVersion for dashboard" duration=2.368791ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.518061803Z level=info msg="Executing migration" id="Add index for dashboard_uid on dashboard_tag table" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.519558484Z level=info msg="Migration successfully executed" id="Add index for dashboard_uid on dashboard_tag table" duration=1.500131ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.525950922Z level=info msg="Executing migration" id="Add missing dashboard_uid and org_id to star" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.526753889Z level=info msg="Migration successfully executed" id="Add missing dashboard_uid and org_id to star" duration=798.987µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.53097499Z level=info msg="Executing migration" id="create data_source table" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.532018572Z level=info msg="Migration successfully executed" id="create data_source table" duration=1.043972ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.535601089Z level=info msg="Executing migration" id="add index data_source.account_id" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.536448147Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=846.848µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.54260668Z level=info msg="Executing migration" id="add unique index data_source.account_id_name" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.543352545Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=748.555µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.547414022Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.548611068Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=1.199806ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.55292705Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.554125217Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=1.197357ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.562227701Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.570542579Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=8.313888ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.576446456Z level=info msg="Executing migration" id="create data_source table v2" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.578325275Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=1.878069ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.58224136Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2" 09:48:57 grafana | logger=migrator 
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.591524539Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.592733315Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=1.212036ms
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.600299537Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.600953752Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=654.715µs
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.608442312Z level=info msg="Executing migration" id="Add column with_credentials"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.610614639Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=2.177377ms
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.614272107Z level=info msg="Executing migration" id="Add secure json data column"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.61675169Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=2.479583ms
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.624110868Z level=info msg="Executing migration" id="Update data_source table charset"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.624145549Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=35.611µs
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.629389962Z level=info msg="Executing migration" id="Update initial version to 1"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.629595556Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=205.544µs
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.632993439Z level=info msg="Executing migration" id="Add read_only data column"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.635481532Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=2.487893ms
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.639690372Z level=info msg="Executing migration" id="Migrate logging ds to loki ds"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.639966648Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=280.066µs
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.644184819Z level=info msg="Executing migration" id="Update json_data with nulls"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.644361563Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=176.954µs
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.650392672Z level=info msg="Executing migration" id="Add uid column"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.653040959Z level=info msg="Migration successfully executed" id="Add uid column" duration=2.648287ms
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.660437638Z level=info msg="Executing migration" id="Update uid value"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.661164653Z level=info msg="Migration successfully executed" id="Update uid value" duration=730.755µs
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.667730494Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.668712325Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=982.221µs
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.674408418Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.675778377Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=1.370079ms
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.680025778Z level=info msg="Executing migration" id="Add is_prunable column"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.682971731Z level=info msg="Migration successfully executed" id="Add is_prunable column" duration=2.945953ms
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.693290742Z level=info msg="Executing migration" id="Add api_version column"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.696946421Z level=info msg="Migration successfully executed" id="Add api_version column" duration=3.655679ms
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.700689771Z level=info msg="Executing migration" id="Update secure_json_data column to MediumText"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.700710902Z level=info msg="Migration successfully executed" id="Update secure_json_data column to MediumText" duration=22.071µs
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.704628746Z level=info msg="Executing migration" id="create api_key table"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.705426313Z level=info msg="Migration successfully executed" id="create api_key table" duration=797.277µs
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.711405141Z level=info msg="Executing migration" id="add index api_key.account_id"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.716488291Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=5.082819ms
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.72253713Z level=info msg="Executing migration" id="add index api_key.key"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.723690235Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=1.156364ms
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.730472321Z level=info msg="Executing migration" id="add index api_key.account_id_name"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.731770478Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=1.297857ms
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.735985799Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.736913139Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=928.099µs
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.746521224Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.74768078Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=1.159676ms
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.752489283Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1"
migration" id="drop index UQE_api_key_account_id_name - v1" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.753691079Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=1.201875ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.760759471Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.768559797Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=7.799756ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.778322017Z level=info msg="Executing migration" id="create api_key table v2" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.779302058Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=983.041µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.783291024Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.78402897Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=737.036µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.792527032Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.793346369Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=819.127µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.799097723Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.79985333Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=755.146µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.806635495Z level=info msg="Executing migration" id="copy api_key v1 to v2" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.806955841Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=320.456µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.813493171Z level=info msg="Executing migration" id="Drop old table api_key_v1" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.814111965Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=618.934µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.818170592Z level=info msg="Executing migration" id="Update api_key table charset" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.818197543Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=27.791µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.82415037Z level=info msg="Executing migration" id="Add expires to api_key table" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.826819757Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=2.668187ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.83717933Z level=info msg="Executing migration" id="Add service account foreign key" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.840053282Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=2.874162ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.843933495Z level=info msg="Executing migration" id="set service account foreign key to nil if 0" 09:48:57 grafana 
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.844128489Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=189.774µs
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.848012833Z level=info msg="Executing migration" id="Add last_used_at to api_key table"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.850540897Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=2.527494ms
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.861470912Z level=info msg="Executing migration" id="Add is_revoked column to api_key table"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.86606369Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=4.596818ms
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.870425894Z level=info msg="Executing migration" id="create dashboard_snapshot table v4"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.871290862Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=864.948µs
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.87541846Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.876506744Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=1.084784ms
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.885865944Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.887193373Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=1.326959ms
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.894135042Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.895224076Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=1.089754ms
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.899464336Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.900858277Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=1.393941ms
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.904872302Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.905763671Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=891.379µs
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.909915591Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.909933331Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=18.21µs
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.91593504Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.915976411Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=64.452µs
t=2025-06-19T09:42:46.924398631Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.929135533Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=4.729922ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.933459156Z level=info msg="Executing migration" id="Add encrypted dashboard json column" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.935910098Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=2.451603ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.941934407Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.941956038Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=22.271µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.948784065Z level=info msg="Executing migration" id="create quota table v1" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.949614182Z level=info msg="Migration successfully executed" id="create quota table v1" duration=830.347µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.954452346Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.955302674Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=849.308µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.96116126Z level=info msg="Executing migration" id="Update quota table charset" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.961189671Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=29.071µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.965263399Z level=info msg="Executing migration" id="create plugin_setting table" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.966133227Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=869.638µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.973526576Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.97559579Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=2.070334ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.983979159Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.988519097Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=4.540928ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.994166358Z level=info msg="Executing migration" id="Update plugin_setting table charset" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:46.99424643Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=80.782µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.000926403Z level=info msg="Executing migration" id="update NULL org_id to 1" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.00125597Z level=info msg="Migration successfully executed" id="update NULL 
org_id to 1" duration=329.367µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.008231085Z level=info msg="Executing migration" id="make org_id NOT NULL and DEFAULT VALUE 1" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.022172053Z level=info msg="Migration successfully executed" id="make org_id NOT NULL and DEFAULT VALUE 1" duration=13.941958ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.028251987Z level=info msg="Executing migration" id="create session table" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.028798399Z level=info msg="Migration successfully executed" id="create session table" duration=546.522µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.037754216Z level=info msg="Executing migration" id="Drop old table playlist table" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.037891149Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=137.483µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.042309827Z level=info msg="Executing migration" id="Drop old table playlist_item table" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.04245508Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=145.713µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.048240838Z level=info msg="Executing migration" id="create playlist table v2" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.049902255Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=1.661077ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.054824294Z level=info msg="Executing migration" id="create playlist item table v2" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.055499228Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=674.604µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.060755834Z level=info msg="Executing migration" id="Update playlist table charset" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.060781875Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=26.421µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.066158794Z level=info msg="Executing migration" id="Update playlist_item table charset" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.066197344Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=39.731µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.070839396Z level=info msg="Executing migration" id="Add playlist column created_at" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.075576291Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=4.737355ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.079874485Z level=info msg="Executing migration" id="Add playlist column updated_at" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.084540868Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=4.665343ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.092198947Z level=info msg="Executing migration" id="drop preferences table v2" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.092288909Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=90.442µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.099494948Z level=info 
msg="Executing migration" id="drop preferences table v3" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.099625542Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=130.904µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.106621045Z level=info msg="Executing migration" id="create preferences table v3" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.109306725Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=2.68362ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.120014661Z level=info msg="Executing migration" id="Update preferences table charset" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.120047492Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=34.711µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.12540914Z level=info msg="Executing migration" id="Add column team_id in preferences" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.131877082Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=6.464922ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.136923404Z level=info msg="Executing migration" id="Update team_id column values in preferences" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.137103248Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=179.864µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.144625094Z level=info msg="Executing migration" id="Add column week_start in preferences" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.149358888Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=4.732304ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.157580149Z level=info msg="Executing migration" id="Add column preferences.json_data" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.160716218Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=3.138909ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.164790428Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.164833269Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=43.431µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.171913585Z level=info msg="Executing migration" id="Add preferences index org_id" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.173560592Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=1.633816ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.178494291Z level=info msg="Executing migration" id="Add preferences index user_id" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.179466322Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=972.171µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.186661611Z level=info msg="Executing migration" id="create alert table v1" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.1884708Z level=info msg="Migration successfully executed" id="create alert table v1" duration=1.812189ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.196977668Z level=info msg="Executing migration" id="add index alert org_id & id " 09:48:57 
grafana | logger=migrator t=2025-06-19T09:42:47.198479331Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=1.501783ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.20252292Z level=info msg="Executing migration" id="add index alert state" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.203734277Z level=info msg="Migration successfully executed" id="add index alert state" duration=1.208897ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.209947644Z level=info msg="Executing migration" id="add index alert dashboard_id" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.210921525Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=977.301µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.21518211Z level=info msg="Executing migration" id="Create alert_rule_tag table v1" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.216211062Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=1.028472ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.224224829Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.225606619Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=1.38111ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.233111455Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.234466525Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=1.35426ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.238539314Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.249168649Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=10.630195ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.254061237Z level=info msg="Executing migration" id="Create alert_rule_tag table v2" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.254543958Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=482.391µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.264411375Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.265780145Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=1.36698ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.269926516Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.270341435Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=413.439µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.274387335Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.275168102Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=780.297µs 09:48:57 grafana | 
logger=migrator t=2025-06-19T09:42:47.28143926Z level=info msg="Executing migration" id="create alert_notification table v1" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.282192428Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=752.867µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.286079113Z level=info msg="Executing migration" id="Add column is_default" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.291815089Z level=info msg="Migration successfully executed" id="Add column is_default" duration=5.731086ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.297902633Z level=info msg="Executing migration" id="Add column frequency" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.301742159Z level=info msg="Migration successfully executed" id="Add column frequency" duration=3.839086ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.308364494Z level=info msg="Executing migration" id="Add column send_reminder" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.312333061Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=3.967017ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.317049556Z level=info msg="Executing migration" id="Add column disable_resolve_message" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.321154127Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=4.104281ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.34172063Z level=info msg="Executing migration" id="add index alert_notification org_id & name" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.343890058Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=2.168348ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.35396721Z level=info msg="Executing migration" id="Update alert table charset" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.354020981Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=70.262µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.360201558Z level=info msg="Executing migration" id="Update alert_notification table charset" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.360259639Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=62.721µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.365479503Z level=info msg="Executing migration" id="create notification_journal table v1" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.366365014Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=881.641µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.372131301Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.373068431Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=937.29µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.377726824Z level=info msg="Executing migration" id="drop alert_notification_journal" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.378496051Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=769.187µs 09:48:57 grafana | logger=migrator 
t=2025-06-19T09:42:47.385300892Z level=info msg="Executing migration" id="create alert_notification_state table v1" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.386374965Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=1.074665ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.39296389Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.394251349Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=1.289669ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.40158483Z level=info msg="Executing migration" id="Add for to alert table" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.405753292Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=4.168172ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.410379004Z level=info msg="Executing migration" id="Add column uid in alert_notification" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.414497085Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=4.118571ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.418226937Z level=info msg="Executing migration" id="Update uid column values in alert_notification" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.41836481Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=138.753µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.427377409Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.428691128Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=1.312989ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.43467165Z level=info msg="Executing migration" id="Remove unique index org_id_name" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.435284193Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=612.373µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.438907144Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.442235507Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=3.327663ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.450428748Z level=info msg="Executing migration" id="alter alert.settings to mediumtext" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.450448718Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=21.001µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.454304803Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.455191272Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=886.099µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.459851645Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id" 09:48:57 grafana | logger=migrator 
t=2025-06-19T09:42:47.460696243Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=844.398µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.470649723Z level=info msg="Executing migration" id="Drop old annotation table v4" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.470826937Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=175.174µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.476585824Z level=info msg="Executing migration" id="create annotation table v5" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.477717179Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=1.131155ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.483896066Z level=info msg="Executing migration" id="add index annotation 0 v3" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.485623923Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=1.731818ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.489631162Z level=info msg="Executing migration" id="add index annotation 1 v3" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.49045652Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=824.768µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.500685936Z level=info msg="Executing migration" id="add index annotation 2 v3" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.501674497Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=988.461µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.506037834Z level=info msg="Executing migration" id="add index annotation 3 v3" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.507403464Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=1.36812ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.516027324Z level=info msg="Executing migration" id="add index annotation 4 v3" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.517523117Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=1.490803ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.522545187Z level=info msg="Executing migration" id="Update annotation table charset" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.52261284Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=72.103µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.527094908Z level=info msg="Executing migration" id="Add column region_id to annotation table" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.530971724Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=3.876535ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.534336558Z level=info msg="Executing migration" id="Drop category_id index" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.534950341Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=613.483µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.541804222Z level=info msg="Executing migration" id="Add column tags to annotation table" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.546479925Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=4.678813ms 
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.553101271Z level=info msg="Executing migration" id="Create annotation_tag table v2" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.553829328Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=757.778µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.557973909Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.55938155Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=1.407241ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.563740576Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.565069856Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=1.32965ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.57254949Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.586468257Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=13.932177ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.590456065Z level=info msg="Executing migration" id="Create annotation_tag table v3" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.591363635Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=911.39µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.5960918Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.597772306Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=1.680626ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.604021365Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.604542706Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=524.881µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.608473642Z level=info msg="Executing migration" id="drop table annotation_tag_v2" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.609060345Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=586.233µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.612702806Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.613012003Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=309.407µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.620816215Z level=info msg="Executing migration" id="Add created time to annotation table" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.625005637Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=4.189282ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.628944174Z level=info 
msg="Executing migration" id="Add updated time to annotation table" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.632927382Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=3.982569ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.636718615Z level=info msg="Executing migration" id="Add index for created in annotation table" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.637730298Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=1.011313ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.647096604Z level=info msg="Executing migration" id="Add index for updated in annotation table" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.648368472Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=1.265748ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.652642517Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.653017245Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=374.128µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.656724816Z level=info msg="Executing migration" id="Add epoch_end column" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.660789496Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=4.06427ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.669440927Z level=info msg="Executing migration" id="Add index for epoch_end" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.670348277Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=907.08µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.680705766Z level=info msg="Executing migration" id="Make epoch_end the same as epoch" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.680953021Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=248.156µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.684702153Z level=info msg="Executing migration" id="Move region to single row" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.685051822Z level=info msg="Migration successfully executed" id="Move region to single row" duration=349.189µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.689235464Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.690121713Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=885.309µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.696506504Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.697262971Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=756.247µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.704883399Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.705984383Z level=info msg="Migration successfully executed" id="Add index for 
org_id_dashboard_id_epoch_end_epoch on annotation table" duration=1.103704ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.714315167Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.715630185Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=1.317858ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.719634374Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.720768149Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=1.136205ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.724629944Z level=info msg="Executing migration" id="Add index for alert_id on annotation table" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.726003354Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=1.37347ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.734596554Z level=info msg="Executing migration" id="Increase tags column to length 4096" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.734642395Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=50.901µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.739788508Z level=info msg="Executing migration" id="Increase prev_state column to length 40 not null" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.739831829Z level=info msg="Migration successfully executed" id="Increase prev_state column to length 40 not null" duration=52.501µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.744029912Z level=info msg="Executing migration" id="Increase new_state column to length 40 not null" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.744058393Z level=info msg="Migration successfully executed" id="Increase new_state column to length 40 not null" duration=31.441µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.750610617Z level=info msg="Executing migration" id="create test_data table" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.752391517Z level=info msg="Migration successfully executed" id="create test_data table" duration=1.745199ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.758866599Z level=info msg="Executing migration" id="create dashboard_version table v1" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.75983646Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=972.591µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.764505084Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.765435024Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=931.33µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.769894183Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.771246482Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=1.35513ms 09:48:57 grafana | logger=migrator 
t=2025-06-19T09:42:47.777204193Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.77838404Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=1.181357ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.784126296Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.784581516Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=455.76µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.790992067Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.791020098Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=30.101µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.796672102Z level=info msg="Executing migration" id="Add apiVersion for dashboard_version" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.804294801Z level=info msg="Migration successfully executed" id="Add apiVersion for dashboard_version" duration=7.621099ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.808531825Z level=info msg="Executing migration" id="create team table" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.80921578Z level=info msg="Migration successfully executed" id="create team table" duration=684.405µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.814866454Z level=info msg="Executing migration" id="add index team.org_id" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.816472409Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=1.608575ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.822543964Z level=info msg="Executing migration" id="add unique index team_org_id_name" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.823636837Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=1.093633ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.827503342Z level=info msg="Executing migration" id="Add column uid in team" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.832432361Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=4.926949ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.83645018Z level=info msg="Executing migration" id="Update uid column values in team" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.836686986Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=236.966µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.843450574Z level=info msg="Executing migration" id="Add unique index team_org_id_uid" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.844669561Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=1.218637ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.848810983Z level=info msg="Executing migration" id="Add column external_uid in team" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.855302045Z level=info msg="Migration successfully executed" id="Add column external_uid in team" duration=6.489272ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.859449547Z level=info 
msg="Executing migration" id="Add column is_provisioned in team" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.864371636Z level=info msg="Migration successfully executed" id="Add column is_provisioned in team" duration=4.921679ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.871111454Z level=info msg="Executing migration" id="create team member table" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.87228741Z level=info msg="Migration successfully executed" id="create team member table" duration=1.165956ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.876951483Z level=info msg="Executing migration" id="add index team_member.org_id" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.879086531Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=2.135927ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.884165513Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.885262226Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=1.097643ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.890457911Z level=info msg="Executing migration" id="add index team_member.team_id" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.891480483Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=1.023462ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.897003045Z level=info msg="Executing migration" id="Add column email to team table" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.902865914Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=5.859459ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.90716639Z level=info msg="Executing migration" id="Add column external to team_member table" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.912549909Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=5.382258ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.918152932Z level=info msg="Executing migration" id="Add column permission to team_member table" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.925013483Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=6.858651ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.929611425Z level=info msg="Executing migration" id="add unique index team_member_user_id_org_id" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.93075967Z level=info msg="Migration successfully executed" id="add unique index team_member_user_id_org_id" duration=1.155586ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.935095256Z level=info msg="Executing migration" id="create dashboard acl table" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.936076927Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=989.542µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.939996024Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.941067467Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=1.070973ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.949130125Z level=info 
msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.950591467Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=1.463902ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.954761599Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.956325763Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=1.563544ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.961045638Z level=info msg="Executing migration" id="add index dashboard_acl_user_id" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.962203413Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=1.157915ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.967803756Z level=info msg="Executing migration" id="add index dashboard_acl_team_id" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.969178537Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=1.374681ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.973862931Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.975958896Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=2.095865ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.983196036Z level=info msg="Executing migration" id="add index dashboard_permission" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.984346001Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=1.149015ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.989512035Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.990094918Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=563.802µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.998477753Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:47.998999405Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=520.562µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.004919674Z level=info msg="Executing migration" id="create tag table" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.005941475Z level=info msg="Migration successfully executed" id="create tag table" duration=1.025511ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.011072526Z level=info msg="Executing migration" id="add index tag.key_value" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.012228751Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=1.155615ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.019768544Z level=info msg="Executing migration" id="create login attempt table" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.021382559Z level=info msg="Migration successfully executed" id="create login attempt table" duration=1.613805ms 09:48:57 grafana | logger=migrator 
t=2025-06-19T09:42:48.025643101Z level=info msg="Executing migration" id="add index login_attempt.username" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.027203045Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=1.563325ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.032665863Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.034216625Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=1.538042ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.040037641Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.05340878Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=13.371309ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.057630021Z level=info msg="Executing migration" id="create login_attempt v2" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.058452329Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=822.159µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.064894197Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.066519163Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=1.624936ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.071073951Z level=info msg="Executing migration" id="copy login_attempt v1 to v2" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.071548491Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=474.651µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.076875306Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.077586292Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=710.636µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.082047847Z level=info msg="Executing migration" id="create user auth table" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.083273064Z level=info msg="Migration successfully executed" id="create user auth table" duration=1.224937ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.092395221Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.093928033Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=1.531772ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.098597835Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.098622175Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=25.52µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.102643082Z level=info msg="Executing migration" id="Add OAuth access token to user_auth" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.108611311Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=5.966169ms 
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.114541799Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.120014507Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=5.472538ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.125782911Z level=info msg="Executing migration" id="Add OAuth token type to user_auth" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.131801891Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=6.01786ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.136451722Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.145754862Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=9.29838ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.152092359Z level=info msg="Executing migration" id="Add index to user_id column in user_auth" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.155033802Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=2.944133ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.16325663Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.173222155Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=9.960696ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.184621501Z level=info msg="Executing migration" id="Add user_unique_id to user_auth" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.189458045Z level=info msg="Migration successfully executed" id="Add user_unique_id to user_auth" duration=4.837834ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.195565027Z level=info msg="Executing migration" id="create server_lock table" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.196894095Z level=info msg="Migration successfully executed" id="create server_lock table" duration=1.330608ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.203235342Z level=info msg="Executing migration" id="add index server_lock.operation_uid" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.204391207Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=1.156125ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.210388527Z level=info msg="Executing migration" id="create user auth token table" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.212292238Z level=info msg="Migration successfully executed" id="create user auth token table" duration=1.903661ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.2216715Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.223564721Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=1.896401ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.22908822Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.230547731Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=1.459541ms 09:48:57 
grafana | logger=migrator t=2025-06-19T09:42:48.236134472Z level=info msg="Executing migration" id="add index user_auth_token.user_id" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.237350279Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=1.215637ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.247586359Z level=info msg="Executing migration" id="Add revoked_at to the user auth token" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.254171792Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=6.587863ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.258243419Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.260670822Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=2.424672ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.267925018Z level=info msg="Executing migration" id="add external_session_id to user_auth_token" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.27680561Z level=info msg="Migration successfully executed" id="add external_session_id to user_auth_token" duration=8.880262ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.282136355Z level=info msg="Executing migration" id="create cache_data table" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.283496694Z level=info msg="Migration successfully executed" id="create cache_data table" duration=1.363939ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.291274692Z level=info msg="Executing migration" id="add unique index cache_data.cache_key" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.292352145Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=1.082593ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.299230194Z level=info msg="Executing migration" id="create short_url table v1" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.300418539Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=1.180835ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.306815407Z level=info msg="Executing migration" id="add index short_url.org_id-uid" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.310214641Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=3.399064ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.31482185Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.31485297Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=26.9µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.319042251Z level=info msg="Executing migration" id="delete alert_definition table" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.319133443Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=91.422µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.325579342Z level=info msg="Executing migration" id="recreate alert_definition table" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.327299709Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=1.718207ms 09:48:57 
grafana | logger=migrator t=2025-06-19T09:42:48.332298297Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.334012074Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=1.736147ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.338347218Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.339538813Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=1.193005ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.345034412Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.345063602Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=27.51µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.351097232Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.352314539Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=1.218827ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.356987079Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.358450931Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=1.464882ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.3639397Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.365048593Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=1.108573ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.369134642Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.370187974Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=1.052642ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.375460988Z level=info msg="Executing migration" id="Add column paused in alert_definition" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.386106388Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=10.6499ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.391702409Z level=info msg="Executing migration" id="drop alert_definition table" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.392400364Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=697.885µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.397815751Z level=info msg="Executing migration" id="delete alert_definition_version table" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.397885362Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=69.961µs 
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.402073262Z level=info msg="Executing migration" id="recreate alert_definition_version table" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.403389441Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=1.319669ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.412074118Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.413495319Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=1.424571ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.420909409Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.42235917Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=1.452922ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.426322135Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.426370877Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=54.951µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.430504286Z level=info msg="Executing migration" id="drop alert_definition_version table" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.431808044Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=1.302088ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.438740514Z level=info msg="Executing migration" id="create alert_instance table" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.440615474Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=1.87466ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.44783048Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.452792787Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=4.963377ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.459043142Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.460695217Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=1.651295ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.465149323Z level=info msg="Executing migration" id="add column current_state_end to alert_instance" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.474843582Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=9.695779ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.480507904Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance" 09:48:57 
grafana | logger=migrator t=2025-06-19T09:42:48.481443395Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=934.971µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.486559046Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.488127219Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=1.567523ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.492650007Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.521141011Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=28.485704ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.528088271Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.552822765Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=24.734784ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.557080087Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.558062978Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=979.661µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.563766311Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.565163082Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=1.39615ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.570482426Z level=info msg="Executing migration" id="add current_reason column related to current_state" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.57669313Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=6.210014ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.580459091Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.58689673Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=6.436769ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.592311367Z level=info msg="Executing migration" id="create alert_rule table" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.59384667Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=1.534633ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.600050604Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.601961955Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=1.911561ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.606533753Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns" 09:48:57 grafana | logger=migrator 
t=2025-06-19T09:42:48.607792621Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=1.258968ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.612318789Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.614742011Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=2.420661ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.621814514Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.621946036Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=132.722µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.627521997Z level=info msg="Executing migration" id="add column for to alert_rule" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.634356884Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=6.833207ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.638771569Z level=info msg="Executing migration" id="add column annotations to alert_rule" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.644322689Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=5.55141ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.653097748Z level=info msg="Executing migration" id="add column labels to alert_rule" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.65779963Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=4.701462ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.661781836Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.662679305Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=897.339µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.666258133Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.667129991Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=869.578µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.672459536Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.676915023Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=4.454717ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.681091882Z level=info msg="Executing migration" id="add panel_id column to alert_rule" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.687722755Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=6.630373ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.691704462Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.69258473Z level=info msg="Migration 
successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=880.158µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.697850234Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.704386585Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=6.50599ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.711668682Z level=info msg="Executing migration" id="add is_paused column to alert_rule table" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.718589861Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=6.920079ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.722811143Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.722951836Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=141.333µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.72874139Z level=info msg="Executing migration" id="create alert_rule_version table" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.730617311Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=1.875161ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.734939704Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.736296074Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.35781ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.740553565Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.742383105Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=1.82847ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.748107168Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.748234211Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=128.093µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.752465893Z level=info msg="Executing migration" id="add column for to alert_rule_version" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.760060836Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=7.597133ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.770268367Z level=info msg="Executing migration" id="add column annotations to alert_rule_version" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.777673227Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=7.40429ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.782227234Z level=info msg="Executing migration" id="add column labels to alert_rule_version" 09:48:57 grafana | logger=migrator 
t=2025-06-19T09:42:48.790163036Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=7.935882ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.798354933Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.805514206Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=7.154193ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.810115006Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.814753737Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=4.63873ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.818793213Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.818865595Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=73.042µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.83068217Z level=info msg="Executing migration" id=create_alert_configuration_table 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.831980338Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=1.298549ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.836411824Z level=info msg="Executing migration" id="Add column default in alert_configuration" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.843729611Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=7.333908ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.847969173Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.848013994Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=45.301µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.8538356Z level=info msg="Executing migration" id="add column org_id in alert_configuration" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.860148946Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=6.312676ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.864899459Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.866233527Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=1.333248ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.873232308Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.883731904Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=10.500796ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.889774065Z level=info msg="Executing migration" id=create_ngalert_configuration_table 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.890584483Z 
level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=809.928µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.896338247Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.897583233Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=1.244526ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.903598603Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.914003127Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=10.404044ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.921366207Z level=info msg="Executing migration" id="create provenance_type table" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.922146033Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=779.416µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.926960367Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.92800353Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=1.026272ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.932612849Z level=info msg="Executing migration" id="create alert_image table" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.933429627Z level=info msg="Migration successfully executed" id="create alert_image table" duration=814.578µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.939892566Z level=info msg="Executing migration" id="add unique index on token to alert_image table" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.941666964Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=1.772868ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.953369627Z level=info msg="Executing migration" id="support longer URLs in alert_image table" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.953410228Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=42.591µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.958748223Z level=info msg="Executing migration" id=create_alert_configuration_history_table 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.960727436Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=1.979413ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.970946157Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.974024203Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=3.076716ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.983537258Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.984313604Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists" 
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.989273232Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.990092779Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=818.777µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.994519705Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:48.996519758Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=1.999173ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.008409255Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.018082253Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=9.671578ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.022275104Z level=info msg="Executing migration" id="create library_element table v1" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.023582972Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=1.312298ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.028018978Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.02955873Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=1.539092ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.037137094Z level=info msg="Executing migration" id="create library_element_connection table v1" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.038248299Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=1.110895ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.043064202Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.044378801Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=1.314009ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.04851729Z level=info msg="Executing migration" id="add unique index library_element org_id_uid" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.049963491Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=1.444311ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.056595224Z level=info msg="Executing migration" id="increase max description length to 2048" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.05688645Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=290.836µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.06292809Z level=info msg="Executing migration" id="alter library_element model to mediumtext" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.062953691Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=26.071µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.066756274Z 
level=info msg="Executing migration" id="add library_element folder uid" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.07450413Z level=info msg="Migration successfully executed" id="add library_element folder uid" duration=7.745986ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.078306543Z level=info msg="Executing migration" id="populate library_element folder_uid" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.078687381Z level=info msg="Migration successfully executed" id="populate library_element folder_uid" duration=381.118µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.085708752Z level=info msg="Executing migration" id="add index library_element org_id-folder_uid-name-kind" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.08794505Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_uid-name-kind" duration=2.239738ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.092489819Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.093243585Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=757.516µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.098175921Z level=info msg="Executing migration" id="create data_keys table" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.099311495Z level=info msg="Migration successfully executed" id="create data_keys table" duration=1.140724ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.10365119Z level=info msg="Executing migration" id="create secrets table" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.104555379Z level=info msg="Migration successfully executed" id="create secrets table" duration=904.069µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.111312055Z level=info msg="Executing migration" id="rename data_keys name column to id" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.151447071Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=40.126986ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.156848907Z level=info msg="Executing migration" id="add name column into data_keys" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.162686403Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=5.836896ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.166624068Z level=info msg="Executing migration" id="copy data_keys id column values into name" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.166887183Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=262.415µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.171021813Z level=info msg="Executing migration" id="rename data_keys name column to label" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.208660535Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=37.631762ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.214180224Z level=info msg="Executing migration" id="rename data_keys id column back to name" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.243955177Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=29.771922ms 09:48:57 grafana | logger=migrator 
t=2025-06-19T09:42:49.248440374Z level=info msg="Executing migration" id="create kv_store table v1" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.249813583Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=1.376459ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.254255939Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.255305071Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=1.043762ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.264325386Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.264587631Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=262.835µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.268352293Z level=info msg="Executing migration" id="create permission table" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.269168901Z level=info msg="Migration successfully executed" id="create permission table" duration=816.238µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.273902103Z level=info msg="Executing migration" id="add unique index permission.role_id" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.274841023Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=940.97µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.283316226Z level=info msg="Executing migration" id="add unique index role_id_action_scope" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.284255166Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=935.51µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.288187131Z level=info msg="Executing migration" id="create role table" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.28905329Z level=info msg="Migration successfully executed" id="create role table" duration=866.419µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.294895316Z level=info msg="Executing migration" id="add column display_name" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.302282725Z level=info msg="Migration successfully executed" id="add column display_name" duration=7.386579ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.313080348Z level=info msg="Executing migration" id="add column group_name" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.320500108Z level=info msg="Migration successfully executed" id="add column group_name" duration=7.42195ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.32519866Z level=info msg="Executing migration" id="add index role.org_id" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.326818485Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=1.622106ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.331116107Z level=info msg="Executing migration" id="add unique index role_org_id_name" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.332111078Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=994.141µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.338807713Z level=info msg="Executing migration" id="add index role_org_id_uid" 
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.339823745Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=1.013442ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.343888783Z level=info msg="Executing migration" id="create team role table" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.34468464Z level=info msg="Migration successfully executed" id="create team role table" duration=795.917µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.352482108Z level=info msg="Executing migration" id="add index team_role.org_id" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.353562801Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=1.080703ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.364821274Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.366051411Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=1.231227ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.371651702Z level=info msg="Executing migration" id="add index team_role.team_id" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.372657654Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=1.005792ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.376826193Z level=info msg="Executing migration" id="create user role table" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.378089731Z level=info msg="Migration successfully executed" id="create user role table" duration=1.267798ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.384589111Z level=info msg="Executing migration" id="add index user_role.org_id" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.385782747Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=1.189656ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.389732992Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.390690522Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=957.84µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.394676949Z level=info msg="Executing migration" id="add index user_role.user_id" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.395929525Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=1.255876ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.402294223Z level=info msg="Executing migration" id="create builtin role table" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.403230803Z level=info msg="Migration successfully executed" id="create builtin role table" duration=937.25µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.406983314Z level=info msg="Executing migration" id="add index builtin_role.role_id" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.408112149Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=1.128935ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.412142225Z level=info msg="Executing migration" id="add index builtin_role.name" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.41328877Z level=info msg="Migration successfully 
executed" id="add index builtin_role.name" duration=1.145515ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.42165987Z level=info msg="Executing migration" id="Add column org_id to builtin_role table" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.436513482Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=14.844801ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.442227925Z level=info msg="Executing migration" id="add index builtin_role.org_id" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.444229317Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=2.003792ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.451170058Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.451990725Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=820.678µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.455989942Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.457157347Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=1.167925ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.462709246Z level=info msg="Executing migration" id="add unique index role.uid" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.463862681Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=1.152625ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.471419414Z level=info msg="Executing migration" id="create seed assignment table" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.47258248Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=1.162376ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.482033073Z level=info msg="Executing migration" id="add unique index builtin_role_role_name" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.483213328Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=1.180195ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.487270747Z level=info msg="Executing migration" id="add column hidden to role table" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.497042097Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=9.768311ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.504499628Z level=info msg="Executing migration" id="permission kind migration" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.510680231Z level=info msg="Migration successfully executed" id="permission kind migration" duration=6.176383ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.514669298Z level=info msg="Executing migration" id="permission attribute migration" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.52126363Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=6.593092ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.526265868Z level=info msg="Executing migration" id="permission identifier migration" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.532840179Z level=info msg="Migration successfully executed" id="permission identifier 
migration" duration=6.568291ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.543544251Z level=info msg="Executing migration" id="add permission identifier index" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.545383141Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=1.83644ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.551792899Z level=info msg="Executing migration" id="add permission action scope role_id index" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.553377612Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=1.593334ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.557960541Z level=info msg="Executing migration" id="remove permission role_id action scope index" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.559073465Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=1.115234ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.563220805Z level=info msg="Executing migration" id="add group mapping UID column to user_role table" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.569624633Z level=info msg="Migration successfully executed" id="add group mapping UID column to user_role table" duration=6.403588ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.577777369Z level=info msg="Executing migration" id="add user_role org ID, user ID, role ID, group mapping UID index" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.57877071Z level=info msg="Migration successfully executed" id="add user_role org ID, user ID, role ID, group mapping UID index" duration=993.881µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.58341029Z level=info msg="Executing migration" id="remove user_role org ID, user ID, role ID index" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.584193257Z level=info msg="Migration successfully executed" id="remove user_role org ID, user ID, role ID index" duration=779.067µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.588374278Z level=info msg="Executing migration" id="create query_history table v1" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.589202575Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=828.247µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.594551291Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.595776508Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=1.224877ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.599767754Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.599795154Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=32.491µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.606063929Z level=info msg="Executing migration" id="create query_history_details table v1" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.607623033Z level=info msg="Migration successfully executed" id="create query_history_details table v1" duration=1.558354ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.614238666Z 
level=info msg="Executing migration" id="rbac disabled migrator" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.614302937Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=65.651µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.620070431Z level=info msg="Executing migration" id="teams permissions migration" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.620574082Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=503.241µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.624854225Z level=info msg="Executing migration" id="dashboard permissions" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.62604849Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=1.197985ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.632847057Z level=info msg="Executing migration" id="dashboard permissions uid scopes" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.633610724Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=763.377µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.640106254Z level=info msg="Executing migration" id="drop managed folder create actions" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.640456682Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=349.959µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.644943258Z level=info msg="Executing migration" id="alerting notification permissions" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.645534061Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=590.333µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.650032298Z level=info msg="Executing migration" id="create query_history_star table v1" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.651489489Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=1.452721ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.659093913Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.660724689Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=1.629436ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.66681458Z level=info msg="Executing migration" id="add column org_id in query_history_star" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.677643084Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=10.823724ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.68209972Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.68211689Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=17.43µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.68808663Z level=info msg="Executing migration" id="create correlation table v1" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.689834967Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=1.738816ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.697782848Z level=info msg="Executing 
migration" id="add index correlations.uid" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.698968824Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=1.188265ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.703734317Z level=info msg="Executing migration" id="add index correlations.source_uid" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.705107227Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=1.36396ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.7103939Z level=info msg="Executing migration" id="add correlation config column" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.721568501Z level=info msg="Migration successfully executed" id="add correlation config column" duration=11.175651ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.727940479Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.729634905Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=1.694896ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.734731785Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.736441593Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=1.709988ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.742620886Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.767247517Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=24.626261ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.775004995Z level=info msg="Executing migration" id="create correlation v2" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.775901864Z level=info msg="Migration successfully executed" id="create correlation v2" duration=897.189µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.783223051Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.784744375Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=1.525154ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.799109074Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.800610767Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=1.505313ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.808121729Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.809433077Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=1.309988ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.813628637Z level=info msg="Executing migration" id="copy correlation v1 to v2" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.813891443Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=263.346µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.81790467Z level=info 
msg="Executing migration" id="drop correlation_tmp_qwerty" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.818803859Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=898.839µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.827212641Z level=info msg="Executing migration" id="add provisioning column" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.835675483Z level=info msg="Migration successfully executed" id="add provisioning column" duration=8.462682ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.842170154Z level=info msg="Executing migration" id="add type column" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.853497438Z level=info msg="Migration successfully executed" id="add type column" duration=11.324614ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.864901654Z level=info msg="Executing migration" id="create entity_events table" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.866797195Z level=info msg="Migration successfully executed" id="create entity_events table" duration=1.895771ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.87725348Z level=info msg="Executing migration" id="create dashboard public config v1" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.878993348Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=1.741498ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.884910126Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.885476558Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.890053577Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.890615119Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.894897351Z level=info msg="Executing migration" id="Drop old dashboard public config table" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.896011336Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=1.109994ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.910421856Z level=info msg="Executing migration" id="recreate dashboard public config v1" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.912246396Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=1.82518ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.916683781Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.918440879Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=1.751438ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.926533014Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.927810321Z level=info msg="Migration successfully executed" id="create index 
IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=1.280207ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.932401681Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.933203178Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=801.237µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.938093293Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.938847729Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=754.476µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.944021001Z level=info msg="Executing migration" id="Drop public config table" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.944809408Z level=info msg="Migration successfully executed" id="Drop public config table" duration=787.737µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.951312928Z level=info msg="Executing migration" id="Recreate dashboard public config v2" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.953026125Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=1.712777ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.959192948Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.960394235Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=1.201587ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.966630009Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.967913566Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.282517ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.973688431Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.974917838Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=1.233327ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:49.982948481Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:50.0096515Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=26.697829ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:50.016285445Z level=info msg="Executing migration" id="add annotations_enabled column" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:50.029101814Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=12.809289ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:50.036538137Z level=info msg="Executing migration" id="add time_selection_enabled column" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:50.048226061Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=11.692794ms 
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:50.056286807Z level=info msg="Executing migration" id="delete orphaned public dashboards" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:50.056545393Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=258.666µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:50.060594741Z level=info msg="Executing migration" id="add share column" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:50.070057428Z level=info msg="Migration successfully executed" id="add share column" duration=9.457347ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:50.076906697Z level=info msg="Executing migration" id="backfill empty share column fields with default of public" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:50.077094081Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=187.324µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:50.080746311Z level=info msg="Executing migration" id="create file table" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:50.081854935Z level=info msg="Migration successfully executed" id="create file table" duration=1.108604ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:50.085917894Z level=info msg="Executing migration" id="file table idx: path natural pk" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:50.087978918Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=2.059514ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:50.097261542Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:50.099716994Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=2.454122ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:50.104460618Z level=info msg="Executing migration" id="create file_meta table" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:50.106146775Z level=info msg="Migration successfully executed" id="create file_meta table" duration=1.685277ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:50.111561033Z level=info msg="Executing migration" id="file table idx: path key" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:50.113443074Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=1.885631ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:50.121485119Z level=info msg="Executing migration" id="set path collation in file table" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:50.12151175Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=26.351µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:50.12562683Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:50.125656341Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=30.811µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:50.129572626Z level=info msg="Executing migration" id="managed permissions migration" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:50.130414704Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=841.678µs 09:48:57 grafana | 
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:50.135066236Z level=info msg="Executing migration" id="managed folder permissions alert actions migration"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:50.13528333Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=217.334µs
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:50.208894016Z level=info msg="Executing migration" id="RBAC action name migrator"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:50.211158996Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=2.268239ms
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:50.244266728Z level=info msg="Executing migration" id="Add UID column to playlist"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:50.255133045Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=10.866727ms
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:50.34519405Z level=info msg="Executing migration" id="Update uid column values in playlist"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:50.345537827Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=334.428µs
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:50.447546763Z level=info msg="Executing migration" id="Add index for uid in playlist"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:50.45015376Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=2.606757ms
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:50.661195083Z level=info msg="Executing migration" id="update group index for alert rules"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:50.662283576Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=1.093903ms
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:50.729267118Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:50.729892231Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=632.184µs
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:50.748958767Z level=info msg="Executing migration" id="admin only folder/dashboard permission"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:50.750050312Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=1.095745ms
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:50.988206647Z level=info msg="Executing migration" id="add action column to seed_assignment"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:50.999929172Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=11.723835ms
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:51.243122968Z level=info msg="Executing migration" id="add scope column to seed_assignment"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:51.250473817Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=7.35483ms
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:51.325074505Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update"
id="remove unique index builtin_role_role_name before nullable update" duration=1.847461ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:51.597419266Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:51.688202986Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=90.78134ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:51.73600686Z level=info msg="Executing migration" id="add unique index builtin_role_name back" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:51.738010763Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=2.006493ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:51.977443476Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:51.978884937Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=1.444961ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:52.091588825Z level=info msg="Executing migration" id="add primary key to seed_assigment" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:52.119417642Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=27.831597ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:52.144174153Z level=info msg="Executing migration" id="add origin column to seed_assignment" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:52.15134851Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=7.174567ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:52.175890285Z level=info msg="Executing migration" id="add origin to plugin seed_assignment" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:52.176699903Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=812.527µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:52.21144898Z level=info msg="Executing migration" id="prevent seeding OnCall access" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:52.212009892Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=562.672µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:52.236936656Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:52.237792095Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=856.879µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:52.263459485Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:52.26416324Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=706.185µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:52.290341271Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:52.290838973Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=497.232µs 09:48:57 grafana | logger=migrator 
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:52.305774578Z level=info msg="Executing migration" id="create folder table"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:52.307632999Z level=info msg="Migration successfully executed" id="create folder table" duration=1.854361ms
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:52.322015173Z level=info msg="Executing migration" id="Add index for parent_uid"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:52.324122139Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=2.107506ms
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:52.346416904Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:52.348615482Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=2.188638ms
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:52.381723075Z level=info msg="Executing migration" id="Update folder title length"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:52.381857588Z level=info msg="Migration successfully executed" id="Update folder title length" duration=130.703µs
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:52.401926135Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:52.404132604Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=2.205179ms
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:52.617987018Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:52.619617084Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=1.634966ms
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:52.689627112Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:52.691649676Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=2.025785ms
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:53.030203781Z level=info msg="Executing migration" id="Sync dashboard and folder table"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:53.031003619Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=800.997µs
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:53.352758998Z level=info msg="Executing migration" id="Remove ghost folders from the folder table"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:53.354213449Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=1.458122ms
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:53.747848676Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:53.750157497Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=2.31055ms
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:54.076465054Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid"
executed" id="Add unique index UQE_folder_org_id_uid" duration=2.499745ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:54.32838887Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:54.330991127Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=2.604987ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:54.587719327Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:54.589287361Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=1.570754ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:54.600071506Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:54.601556778Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=1.485782ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:54.609157344Z level=info msg="Executing migration" id="Remove unique index UQE_folder_org_id_parent_uid_title" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:54.611578507Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_org_id_parent_uid_title" duration=2.424253ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:54.616569156Z level=info msg="Executing migration" id="create anon_device table" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:54.617586518Z level=info msg="Migration successfully executed" id="create anon_device table" duration=1.017182ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:54.621212887Z level=info msg="Executing migration" id="add unique index anon_device.device_id" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:54.622417824Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=1.204787ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:54.629170001Z level=info msg="Executing migration" id="add index anon_device.updated_at" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:54.630921279Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=1.755328ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:54.641255305Z level=info msg="Executing migration" id="create signing_key table" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:54.642646945Z level=info msg="Migration successfully executed" id="create signing_key table" duration=1.39438ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:54.647046471Z level=info msg="Executing migration" id="add unique index signing_key.key_id" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:54.648337579Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=1.290818ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:54.656294793Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:54.658321027Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=2.026444ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:54.66762258Z level=info msg="Executing migration" id="migrate record of created folders during 
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:54.66762258Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:54.668026659Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=404.888µs
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:54.676150326Z level=info msg="Executing migration" id="Add folder_uid for dashboard"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:54.686053502Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=9.902266ms
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:54.690689613Z level=info msg="Executing migration" id="Populate dashboard folder_uid column"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:54.691710036Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=1.023283ms
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:54.700124749Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:54.70014948Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=26.261µs
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:54.703952392Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:54.705304532Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=1.35165ms
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:54.708845689Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:54.70886436Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=16.641µs
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:54.712499129Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:54.713829998Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=1.330649ms
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:54.721198568Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:54.725143255Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=3.942527ms
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:54.734224563Z level=info msg="Executing migration" id="Remove unique index for dashboard_org_id_folder_uid_title_is_folder"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:54.736166616Z level=info msg="Migration successfully executed" id="Remove unique index for dashboard_org_id_folder_uid_title_is_folder" duration=1.943823ms
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:54.742590975Z level=info msg="Executing migration" id="create sso_setting table"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:54.74420692Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=1.615375ms
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:54.748357781Z level=info msg="Executing migration" id="copy kvstore migration status to each org"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:54.749304942Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=948.94µs
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:54.755130419Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:54.755619769Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=497.041µs
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:54.765072606Z level=info msg="Executing migration" id="managed dashboard permissions annotation actions migration"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:54.765954245Z level=info msg="Migration successfully executed" id="managed dashboard permissions annotation actions migration" duration=881.789µs
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:54.770718159Z level=info msg="Executing migration" id="create cloud_migration table v1"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:54.772297473Z level=info msg="Migration successfully executed" id="create cloud_migration table v1" duration=1.578404ms
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:54.776389753Z level=info msg="Executing migration" id="create cloud_migration_run table v1"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:54.777524177Z level=info msg="Migration successfully executed" id="create cloud_migration_run table v1" duration=1.133964ms
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:54.786154885Z level=info msg="Executing migration" id="add stack_id column"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:54.798682329Z level=info msg="Migration successfully executed" id="add stack_id column" duration=12.525394ms
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:54.803279129Z level=info msg="Executing migration" id="add region_slug column"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:54.817401678Z level=info msg="Migration successfully executed" id="add region_slug column" duration=14.115898ms
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:54.823439849Z level=info msg="Executing migration" id="add cluster_slug column"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:54.831909804Z level=info msg="Migration successfully executed" id="add cluster_slug column" duration=8.468815ms
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:54.838022427Z level=info msg="Executing migration" id="add migration uid column"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:54.848173769Z level=info msg="Migration successfully executed" id="add migration uid column" duration=10.149952ms
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:54.855181131Z level=info msg="Executing migration" id="Update uid column values for migration"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:54.855365115Z level=info msg="Migration successfully executed" id="Update uid column values for migration" duration=183.564µs
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:54.859453065Z level=info msg="Executing migration" id="Add unique index migration_uid"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:54.861763975Z level=info msg="Migration successfully executed" id="Add unique index migration_uid" duration=2.309599ms
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:54.867849488Z level=info msg="Executing migration" id="add migration run uid column"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:54.878175313Z level=info msg="Migration successfully executed" id="add migration run uid column" duration=10.326295ms
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:54.883745314Z level=info msg="Executing migration" id="Update uid column values for migration run"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:54.884306236Z level=info msg="Migration successfully executed" id="Update uid column values for migration run" duration=563.322µs
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:54.89360638Z level=info msg="Executing migration" id="Add unique index migration_run_uid"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:54.896221907Z level=info msg="Migration successfully executed" id="Add unique index migration_run_uid" duration=2.612857ms
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:54.901515543Z level=info msg="Executing migration" id="Rename table cloud_migration to cloud_migration_session_tmp_qwerty - v1"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:54.926448006Z level=info msg="Migration successfully executed" id="Rename table cloud_migration to cloud_migration_session_tmp_qwerty - v1" duration=24.925903ms
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:54.931606588Z level=info msg="Executing migration" id="create cloud_migration_session v2"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:54.93305396Z level=info msg="Migration successfully executed" id="create cloud_migration_session v2" duration=1.537484ms
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:54.938670903Z level=info msg="Executing migration" id="create index UQE_cloud_migration_session_uid - v2"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:54.94082017Z level=info msg="Migration successfully executed" id="create index UQE_cloud_migration_session_uid - v2" duration=2.152597ms
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:54.947608058Z level=info msg="Executing migration" id="copy cloud_migration_session v1 to v2"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:54.948425585Z level=info msg="Migration successfully executed" id="copy cloud_migration_session v1 to v2" duration=820.628µs
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:54.953970436Z level=info msg="Executing migration" id="drop cloud_migration_session_tmp_qwerty"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:54.955678604Z level=info msg="Migration successfully executed" id="drop cloud_migration_session_tmp_qwerty" duration=1.710998ms
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:54.960444638Z level=info msg="Executing migration" id="Rename table cloud_migration_run to cloud_migration_snapshot_tmp_qwerty - v1"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:54.989657405Z level=info msg="Migration successfully executed" id="Rename table cloud_migration_run to cloud_migration_snapshot_tmp_qwerty - v1" duration=29.208157ms
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:54.993966539Z level=info msg="Executing migration" id="create cloud_migration_snapshot v2"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:54.994704125Z level=info msg="Migration successfully executed" id="create cloud_migration_snapshot v2" duration=737.396µs
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.004784404Z level=info msg="Executing migration" id="create index UQE_cloud_migration_snapshot_uid - v2"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.007185977Z level=info msg="Migration successfully executed" id="create index UQE_cloud_migration_snapshot_uid - v2" duration=2.405383ms
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.01282401Z level=info msg="Executing migration" id="copy cloud_migration_snapshot v1 to v2"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.013482695Z level=info msg="Migration successfully executed" id="copy cloud_migration_snapshot v1 to v2" duration=658.365µs
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.017513743Z level=info msg="Executing migration" id="drop cloud_migration_snapshot_tmp_qwerty"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.018513404Z level=info msg="Migration successfully executed" id="drop cloud_migration_snapshot_tmp_qwerty" duration=998.962µs
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.024804262Z level=info msg="Executing migration" id="add snapshot upload_url column"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.033706086Z level=info msg="Migration successfully executed" id="add snapshot upload_url column" duration=8.901904ms
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.038787067Z level=info msg="Executing migration" id="add snapshot status column"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.046960954Z level=info msg="Migration successfully executed" id="add snapshot status column" duration=8.172687ms
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.055411199Z level=info msg="Executing migration" id="add snapshot local_directory column"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.065232223Z level=info msg="Migration successfully executed" id="add snapshot local_directory column" duration=9.820464ms
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.071230504Z level=info msg="Executing migration" id="add snapshot gms_snapshot_uid column"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.080728472Z level=info msg="Migration successfully executed" id="add snapshot gms_snapshot_uid column" duration=9.497478ms
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.085101297Z level=info msg="Executing migration" id="add snapshot encryption_key column"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.093286785Z level=info msg="Migration successfully executed" id="add snapshot encryption_key column" duration=8.184448ms
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.101167528Z level=info msg="Executing migration" id="add snapshot error_string column"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.110755036Z level=info msg="Migration successfully executed" id="add snapshot error_string column" duration=9.586479ms
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.121228015Z level=info msg="Executing migration" id="create cloud_migration_resource table v1"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.122739588Z level=info msg="Migration successfully executed" id="create cloud_migration_resource table v1" duration=1.510843ms
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.127505422Z level=info msg="Executing migration" id="delete cloud_migration_snapshot.result column"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.165257156Z level=info msg="Migration successfully executed" id="delete cloud_migration_snapshot.result column" duration=37.752714ms
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.169802564Z level=info msg="Executing migration" id="add cloud_migration_resource.name column"
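The run of "add snapshot ... column" entries above is the incremental ALTER TABLE pattern: each column lands in its own migration step, so a re-run can skip whatever already exists. A small sqlite3 sketch of one way such steps stay idempotent (the table name and column list come from the log; the TEXT types and helper are assumptions for illustration):

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE cloud_migration_snapshot (uid TEXT)")

    def add_column_if_missing(con, table, column, decl):
        # PRAGMA table_info lists the existing columns; skipping the ALTER
        # when the column is present makes the step safe to repeat.
        cols = {row[1] for row in con.execute(f"PRAGMA table_info({table})")}
        if column not in cols:
            con.execute(f"ALTER TABLE {table} ADD COLUMN {column} {decl}")

    for col in ("upload_url", "status", "local_directory",
                "gms_snapshot_uid", "encryption_key", "error_string"):
        add_column_if_missing(con, "cloud_migration_snapshot", col, "TEXT")

    print([r[1] for r in con.execute("PRAGMA table_info(cloud_migration_snapshot)")])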
successfully executed" id="add cloud_migration_resource.name column" duration=7.656327ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.184764961Z level=info msg="Executing migration" id="add cloud_migration_resource.parent_name column" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.194501393Z level=info msg="Migration successfully executed" id="add cloud_migration_resource.parent_name column" duration=9.735312ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.208913897Z level=info msg="Executing migration" id="add cloud_migration_session.org_id column" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.218055078Z level=info msg="Migration successfully executed" id="add cloud_migration_session.org_id column" duration=9.140871ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.2222941Z level=info msg="Executing migration" id="add cloud_migration_resource.error_code column" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.231984061Z level=info msg="Migration successfully executed" id="add cloud_migration_resource.error_code column" duration=9.687651ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.238374911Z level=info msg="Executing migration" id="increase resource_uid column length" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.238395431Z level=info msg="Migration successfully executed" id="increase resource_uid column length" duration=23.661µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.245649829Z level=info msg="Executing migration" id="alter kv_store.value to longtext" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.245676549Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=28.15µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.250430203Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.263321445Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=12.892452ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.270367888Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.28006046Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=9.691492ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.284345513Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.28465469Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=308.967µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.288531095Z level=info msg="Executing migration" id="managed folder permissions alerting silences actions migration" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.288804831Z level=info msg="Migration successfully executed" id="managed folder permissions alerting silences actions migration" duration=273.416µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.292953571Z level=info msg="Executing migration" id="add record column to alert_rule table" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.302712074Z level=info msg="Migration successfully executed" id="add record column to alert_rule table" 
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.302712074Z level=info msg="Migration successfully executed" id="add record column to alert_rule table" duration=9.750913ms
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.310055104Z level=info msg="Executing migration" id="add record column to alert_rule_version table"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.319832577Z level=info msg="Migration successfully executed" id="add record column to alert_rule_version table" duration=9.776793ms
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.328936666Z level=info msg="Executing migration" id="add resolved_at column to alert_instance table"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.341701355Z level=info msg="Migration successfully executed" id="add resolved_at column to alert_instance table" duration=12.765579ms
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.347774877Z level=info msg="Executing migration" id="add last_sent_at column to alert_instance table"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.357453638Z level=info msg="Migration successfully executed" id="add last_sent_at column to alert_instance table" duration=9.677721ms
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.363190193Z level=info msg="Executing migration" id="Add scope to alert.notifications.receivers:read and alert.notifications.receivers.secrets:read"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.363616773Z level=info msg="Migration successfully executed" id="Add scope to alert.notifications.receivers:read and alert.notifications.receivers.secrets:read" duration=423.75µs
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.371264969Z level=info msg="Executing migration" id="add metadata column to alert_rule table"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.384698412Z level=info msg="Migration successfully executed" id="add metadata column to alert_rule table" duration=13.434523ms
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.392929512Z level=info msg="Executing migration" id="add metadata column to alert_rule_version table"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.403209236Z level=info msg="Migration successfully executed" id="add metadata column to alert_rule_version table" duration=10.278964ms
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.410888903Z level=info msg="Executing migration" id="delete orphaned service account permissions"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.411284723Z level=info msg="Migration successfully executed" id="delete orphaned service account permissions" duration=395.39µs
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.415514825Z level=info msg="Executing migration" id="adding action set permissions"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.416084957Z level=info msg="Migration successfully executed" id="adding action set permissions" duration=569.713µs
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.42218263Z level=info msg="Executing migration" id="create user_external_session table"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.423330355Z level=info msg="Migration successfully executed" id="create user_external_session table" duration=1.146875ms
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.429197593Z level=info msg="Executing migration" id="increase name_id column length to 1024"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.429225954Z level=info msg="Migration successfully executed" id="increase name_id column length to 1024" duration=29.781µs
level=info msg="Executing migration" id="increase session_id column length to 1024" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.437108186Z level=info msg="Migration successfully executed" id="increase session_id column length to 1024" duration=54.131µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.442465953Z level=info msg="Executing migration" id="remove scope from alert.notifications.receivers:create" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.442828721Z level=info msg="Migration successfully executed" id="remove scope from alert.notifications.receivers:create" duration=360.148µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.448021094Z level=info msg="Executing migration" id="add created_by column to alert_rule_version table" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.458970693Z level=info msg="Migration successfully executed" id="add created_by column to alert_rule_version table" duration=10.939219ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.468628323Z level=info msg="Executing migration" id="add updated_by column to alert_rule table" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.481230408Z level=info msg="Migration successfully executed" id="add updated_by column to alert_rule table" duration=12.598604ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.491545613Z level=info msg="Executing migration" id="add alert_rule_state table" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.492852552Z level=info msg="Migration successfully executed" id="add alert_rule_state table" duration=1.310989ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.497219837Z level=info msg="Executing migration" id="add index to alert_rule_state on org_id and rule_uid columns" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.498132627Z level=info msg="Migration successfully executed" id="add index to alert_rule_state on org_id and rule_uid columns" duration=912.17µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.503427082Z level=info msg="Executing migration" id="add guid column to alert_rule table" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.510758942Z level=info msg="Migration successfully executed" id="add guid column to alert_rule table" duration=7.32583ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.515374523Z level=info msg="Executing migration" id="add rule_guid column to alert_rule_version table" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.528496039Z level=info msg="Migration successfully executed" id="add rule_guid column to alert_rule_version table" duration=13.122566ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.534151212Z level=info msg="Executing migration" id="cleanup alert_rule_version table" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.534221934Z level=info msg="Rule version record limit is not set, fallback to 100" limit=0 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.534635664Z level=info msg="Cleaning up table `alert_rule_version`" batchSize=50 batches=0 keepVersions=100 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.534693485Z level=info msg="Migration successfully executed" id="cleanup alert_rule_version table" duration=535.653µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.54136912Z level=info msg="Executing migration" id="populate rule guid in alert rule table" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.541969273Z level=info msg="Migration successfully 
executed" id="populate rule guid in alert rule table" duration=599.443µs 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.546266897Z level=info msg="Executing migration" id="drop index in alert_rule_version table on rule_org_id, rule_uid and version columns" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.547274339Z level=info msg="Migration successfully executed" id="drop index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.006582ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.553005754Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid, rule_guid and version columns" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.554063146Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid, rule_guid and version columns" duration=1.056532ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.559668369Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_guid and version columns" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.561404227Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_guid and version columns" duration=1.735498ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.569446462Z level=info msg="Executing migration" id="add index in alert_rule table on guid columns" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.571822374Z level=info msg="Migration successfully executed" id="add index in alert_rule table on guid columns" duration=2.379052ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.578245964Z level=info msg="Executing migration" id="add keep_firing_for column to alert_rule" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.588392226Z level=info msg="Migration successfully executed" id="add keep_firing_for column to alert_rule" duration=10.145622ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.595992281Z level=info msg="Executing migration" id="add keep_firing_for column to alert_rule_version" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.609113448Z level=info msg="Migration successfully executed" id="add keep_firing_for column to alert_rule_version" duration=13.125727ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.614329372Z level=info msg="Executing migration" id="add missing_series_evals_to_resolve column to alert_rule" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.625228109Z level=info msg="Migration successfully executed" id="add missing_series_evals_to_resolve column to alert_rule" duration=10.898698ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.629728637Z level=info msg="Executing migration" id="add missing_series_evals_to_resolve column to alert_rule_version" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.63852194Z level=info msg="Migration successfully executed" id="add missing_series_evals_to_resolve column to alert_rule_version" duration=8.797153ms 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.643064338Z level=info msg="Executing migration" id="remove the datasources:drilldown action" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.643273513Z level=info msg="Removed 0 datasources:drilldown permissions" 09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.643289973Z level=info msg="Migration successfully executed" id="remove the datasources:drilldown action" 
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.643289973Z level=info msg="Migration successfully executed" id="remove the datasources:drilldown action" duration=226.115µs
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.648233341Z level=info msg="Executing migration" id="remove title in folder unique index"
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.649932058Z level=info msg="Migration successfully executed" id="remove title in folder unique index" duration=1.698497ms
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.657279238Z level=info msg="migrations completed" performed=654 skipped=0 duration=9.878600896s
09:48:57 grafana | logger=migrator t=2025-06-19T09:42:55.658332332Z level=info msg="Unlocking database"
09:48:57 grafana | logger=sqlstore t=2025-06-19T09:42:55.675060917Z level=info msg="Created default admin" user=admin
09:48:57 grafana | logger=sqlstore t=2025-06-19T09:42:55.675329232Z level=info msg="Created default organization"
09:48:57 grafana | logger=secrets t=2025-06-19T09:42:55.680861843Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1
09:48:57 grafana | logger=plugin.angulardetectorsprovider.dynamic t=2025-06-19T09:42:55.788079882Z level=info msg="Restored cache from database" duration=579.183µs
09:48:57 grafana | logger=resource-migrator t=2025-06-19T09:42:55.800423841Z level=info msg="Locking database"
09:48:57 grafana | logger=resource-migrator t=2025-06-19T09:42:55.800462281Z level=info msg="Starting DB migrations"
09:48:57 grafana | logger=resource-migrator t=2025-06-19T09:42:55.807994866Z level=info msg="Executing migration" id="create resource_migration_log table"
09:48:57 grafana | logger=resource-migrator t=2025-06-19T09:42:55.809009258Z level=info msg="Migration successfully executed" id="create resource_migration_log table" duration=1.014012ms
09:48:57 grafana | logger=resource-migrator t=2025-06-19T09:42:55.818137748Z level=info msg="Executing migration" id="Initialize resource tables"
09:48:57 grafana | logger=resource-migrator t=2025-06-19T09:42:55.818154168Z level=info msg="Migration successfully executed" id="Initialize resource tables" duration=17.05µs
09:48:57 grafana | logger=resource-migrator t=2025-06-19T09:42:55.822581044Z level=info msg="Executing migration" id="drop table resource"
09:48:57 grafana | logger=resource-migrator t=2025-06-19T09:42:55.822784028Z level=info msg="Migration successfully executed" id="drop table resource" duration=202.434µs
09:48:57 grafana | logger=resource-migrator t=2025-06-19T09:42:55.828510903Z level=info msg="Executing migration" id="create table resource"
09:48:57 grafana | logger=resource-migrator t=2025-06-19T09:42:55.829962265Z level=info msg="Migration successfully executed" id="create table resource" duration=1.454272ms
09:48:57 grafana | logger=resource-migrator t=2025-06-19T09:42:55.83428363Z level=info msg="Executing migration" id="create table resource, index: 0"
09:48:57 grafana | logger=resource-migrator t=2025-06-19T09:42:55.835609099Z level=info msg="Migration successfully executed" id="create table resource, index: 0" duration=1.324849ms
09:48:57 grafana | logger=resource-migrator t=2025-06-19T09:42:55.841895915Z level=info msg="Executing migration" id="drop table resource_history"
09:48:57 grafana | logger=resource-migrator t=2025-06-19T09:42:55.842017428Z level=info msg="Migration successfully executed" id="drop table resource_history" duration=123.323µs
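For scale, the "migrations completed" summary above reports performed=654 skipped=0 in duration=9.878600896s, which works out to roughly 66 schema migrations per second on this SQLite store:

    performed = 654            # from the "migrations completed" line above
    duration_s = 9.878600896   # total reported by the migrator
    print(f"{performed / duration_s:.1f} migrations/s")  # ~66.2 migrations/s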
level=info msg="Migration successfully executed" id="create table resource_history" duration=1.925062ms 09:48:57 grafana | logger=resource-migrator t=2025-06-19T09:42:55.853775844Z level=info msg="Executing migration" id="create table resource_history, index: 0" 09:48:57 grafana | logger=resource-migrator t=2025-06-19T09:42:55.855189296Z level=info msg="Migration successfully executed" id="create table resource_history, index: 0" duration=1.412842ms 09:48:57 grafana | logger=resource-migrator t=2025-06-19T09:42:55.859154902Z level=info msg="Executing migration" id="create table resource_history, index: 1" 09:48:57 grafana | logger=resource-migrator t=2025-06-19T09:42:55.860456001Z level=info msg="Migration successfully executed" id="create table resource_history, index: 1" duration=1.301109ms 09:48:57 grafana | logger=resource-migrator t=2025-06-19T09:42:55.86866262Z level=info msg="Executing migration" id="drop table resource_version" 09:48:57 grafana | logger=resource-migrator t=2025-06-19T09:42:55.868864974Z level=info msg="Migration successfully executed" id="drop table resource_version" duration=209.874µs 09:48:57 grafana | logger=resource-migrator t=2025-06-19T09:42:55.874197151Z level=info msg="Executing migration" id="create table resource_version" 09:48:57 grafana | logger=resource-migrator t=2025-06-19T09:42:55.875321585Z level=info msg="Migration successfully executed" id="create table resource_version" duration=1.124374ms 09:48:57 grafana | logger=resource-migrator t=2025-06-19T09:42:55.88061617Z level=info msg="Executing migration" id="create table resource_version, index: 0" 09:48:57 grafana | logger=resource-migrator t=2025-06-19T09:42:55.882044382Z level=info msg="Migration successfully executed" id="create table resource_version, index: 0" duration=1.427822ms 09:48:57 grafana | logger=resource-migrator t=2025-06-19T09:42:55.887121472Z level=info msg="Executing migration" id="drop table resource_blob" 09:48:57 grafana | logger=resource-migrator t=2025-06-19T09:42:55.887228514Z level=info msg="Migration successfully executed" id="drop table resource_blob" duration=106.962µs 09:48:57 grafana | logger=resource-migrator t=2025-06-19T09:42:55.897467278Z level=info msg="Executing migration" id="create table resource_blob" 09:48:57 grafana | logger=resource-migrator t=2025-06-19T09:42:55.89849127Z level=info msg="Migration successfully executed" id="create table resource_blob" duration=1.023632ms 09:48:57 grafana | logger=resource-migrator t=2025-06-19T09:42:55.905410581Z level=info msg="Executing migration" id="create table resource_blob, index: 0" 09:48:57 grafana | logger=resource-migrator t=2025-06-19T09:42:55.906802991Z level=info msg="Migration successfully executed" id="create table resource_blob, index: 0" duration=1.3945ms 09:48:57 grafana | logger=resource-migrator t=2025-06-19T09:42:55.912307041Z level=info msg="Executing migration" id="create table resource_blob, index: 1" 09:48:57 grafana | logger=resource-migrator t=2025-06-19T09:42:55.914875487Z level=info msg="Migration successfully executed" id="create table resource_blob, index: 1" duration=2.567926ms 09:48:57 grafana | logger=resource-migrator t=2025-06-19T09:42:55.923544076Z level=info msg="Executing migration" id="Add column previous_resource_version in resource_history" 09:48:57 grafana | logger=resource-migrator t=2025-06-19T09:42:55.934444054Z level=info msg="Migration successfully executed" id="Add column previous_resource_version in resource_history" duration=10.897868ms 09:48:57 grafana | 
09:48:57 grafana | logger=resource-migrator t=2025-06-19T09:42:55.944004773Z level=info msg="Executing migration" id="Add column previous_resource_version in resource"
09:48:57 grafana | logger=resource-migrator t=2025-06-19T09:42:55.958275415Z level=info msg="Migration successfully executed" id="Add column previous_resource_version in resource" duration=14.270542ms
09:48:57 grafana | logger=resource-migrator t=2025-06-19T09:42:55.962916435Z level=info msg="Executing migration" id="Add index to resource_history for polling"
09:48:57 grafana | logger=resource-migrator t=2025-06-19T09:42:55.963957459Z level=info msg="Migration successfully executed" id="Add index to resource_history for polling" duration=1.038283ms
09:48:57 grafana | logger=resource-migrator t=2025-06-19T09:42:55.967740471Z level=info msg="Executing migration" id="Add index to resource for loading"
09:48:57 grafana | logger=resource-migrator t=2025-06-19T09:42:55.969002629Z level=info msg="Migration successfully executed" id="Add index to resource for loading" duration=1.259968ms
09:48:57 grafana | logger=resource-migrator t=2025-06-19T09:42:55.974644152Z level=info msg="Executing migration" id="Add column folder in resource_history"
09:48:57 grafana | logger=resource-migrator t=2025-06-19T09:42:55.990529658Z level=info msg="Migration successfully executed" id="Add column folder in resource_history" duration=15.872325ms
09:48:57 grafana | logger=resource-migrator t=2025-06-19T09:42:55.994945354Z level=info msg="Executing migration" id="Add column folder in resource"
09:48:57 grafana | logger=resource-migrator t=2025-06-19T09:42:56.006020276Z level=info msg="Migration successfully executed" id="Add column folder in resource" duration=11.073852ms
09:48:57 grafana | logger=resource-migrator t=2025-06-19T09:42:56.011405154Z level=info msg="Executing migration" id="Migrate DeletionMarkers to real Resource objects"
09:48:57 grafana | logger=deletion-marker-migrator t=2025-06-19T09:42:56.011435184Z level=info msg="finding any deletion markers"
09:48:57 grafana | logger=resource-migrator t=2025-06-19T09:42:56.011925785Z level=info msg="Migration successfully executed" id="Migrate DeletionMarkers to real Resource objects" duration=520.641µs
09:48:57 grafana | logger=resource-migrator t=2025-06-19T09:42:56.018013987Z level=info msg="Executing migration" id="Add index to resource_history for get trash"
09:48:57 grafana | logger=resource-migrator t=2025-06-19T09:42:56.019458539Z level=info msg="Migration successfully executed" id="Add index to resource_history for get trash" duration=1.443752ms
09:48:57 grafana | logger=resource-migrator t=2025-06-19T09:42:56.023722682Z level=info msg="Executing migration" id="Add generation to resource history"
09:48:57 grafana | logger=resource-migrator t=2025-06-19T09:42:56.035624861Z level=info msg="Migration successfully executed" id="Add generation to resource history" duration=11.901249ms
09:48:57 grafana | logger=resource-migrator t=2025-06-19T09:42:56.041088741Z level=info msg="Executing migration" id="Add generation index to resource history"
09:48:57 grafana | logger=resource-migrator t=2025-06-19T09:42:56.042488782Z level=info msg="Migration successfully executed" id="Add generation index to resource history" duration=1.400991ms
09:48:57 grafana | logger=resource-migrator t=2025-06-19T09:42:56.049009234Z level=info msg="migrations completed" performed=26 skipped=0 duration=241.059368ms
09:48:57 grafana | logger=resource-migrator t=2025-06-19T09:42:56.049681448Z level=info msg="Unlocking database"
09:48:57 grafana | t=2025-06-19T09:42:56.049935623Z level=info caller=logger.go:214 time=2025-06-19T09:42:56.049912713Z msg="Using channel notifier" logger=sql-resource-server
09:48:57 grafana | logger=plugin.store t=2025-06-19T09:42:56.063134512Z level=info msg="Loading plugins..."
09:48:57 grafana | logger=plugins.registration t=2025-06-19T09:42:56.099598067Z level=error msg="Could not register plugin" pluginId=table error="plugin table is already registered"
09:48:57 grafana | logger=plugins.initialization t=2025-06-19T09:42:56.099629248Z level=error msg="Could not initialize plugin" pluginId=table error="plugin table is already registered"
09:48:57 grafana | logger=plugin.store t=2025-06-19T09:42:56.09973494Z level=info msg="Plugins loaded" count=53 duration=36.602318ms
09:48:57 grafana | logger=query_data t=2025-06-19T09:42:56.105029546Z level=info msg="Query Service initialization"
09:48:57 grafana | logger=live.push_http t=2025-06-19T09:42:56.109744869Z level=info msg="Live Push Gateway initialization"
09:48:57 grafana | logger=ngalert.notifier.alertmanager org=1 t=2025-06-19T09:42:56.128505788Z level=info msg="Applying new configuration to Alertmanager" configHash=d2c56faca6af2a5772ff4253222f7386
09:48:57 grafana | logger=ngalert t=2025-06-19T09:42:56.143464004Z level=info msg="Using simple database alert instance store"
09:48:57 grafana | logger=ngalert.state.manager.persist t=2025-06-19T09:42:56.143592487Z level=info msg="Using sync state persister"
09:48:57 grafana | logger=infra.usagestats.collector t=2025-06-19T09:42:56.145974719Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2
09:48:57 grafana | logger=ngalert.state.manager t=2025-06-19T09:42:56.146334157Z level=info msg="Warming state cache for startup"
09:48:57 grafana | logger=ngalert.state.manager t=2025-06-19T09:42:56.147519782Z level=info msg="State cache has been initialized" states=0 duration=1.185085ms
09:48:57 grafana | logger=http.server t=2025-06-19T09:42:56.148884113Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=http subUrl= socket=
09:48:57 grafana | logger=ngalert.multiorg.alertmanager t=2025-06-19T09:42:56.149014595Z level=info msg="Starting MultiOrg Alertmanager"
09:48:57 grafana | logger=plugin.backgroundinstaller t=2025-06-19T09:42:56.149170589Z level=info msg="Installing plugin" pluginId=grafana-metricsdrilldown-app version=
09:48:57 grafana | logger=ngalert.scheduler t=2025-06-19T09:42:56.149201529Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=3
09:48:57 grafana | logger=ticker t=2025-06-19T09:42:56.149379833Z level=info msg=starting first_tick=2025-06-19T09:43:00Z
09:48:57 grafana | logger=grafanaStorageLogger t=2025-06-19T09:42:56.149568897Z level=info msg="Storage starting"
09:48:57 grafana | logger=plugins.update.checker t=2025-06-19T09:42:56.250259044Z level=info msg="Update check succeeded" duration=84.624906ms
09:48:57 grafana | logger=grafana.update.checker t=2025-06-19T09:42:56.251064051Z level=info msg="Update check succeeded" duration=86.127358ms
09:48:57 grafana | logger=provisioning.datasources t=2025-06-19T09:42:56.263554604Z level=info msg="inserting datasource from configuration" name=PolicyPrometheus uid=dkSf71fnz
09:48:57 grafana | logger=sqlstore.transactions t=2025-06-19T09:42:56.275666438Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0
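The sqlstore.transactions entries here (and just below) are SQLite write contention during concurrent provisioning, not failures: the store sleeps briefly and retries, and each attempt logs its retry counter. A minimal sketch of that retry shape in Python's sqlite3 (the backoff constants and function name are illustrative):

    import random
    import sqlite3
    import time

    def execute_with_retry(con, sql, params=(), retries=5):
        # Retry only the "database is locked" contention case, with a short,
        # slightly jittered sleep between attempts, like the log lines above.
        for attempt in range(retries):
            try:
                return con.execute(sql, params)
            except sqlite3.OperationalError as err:
                if "database is locked" not in str(err) or attempt == retries - 1:
                    raise
                time.sleep(0.05 * (attempt + 1) + random.random() * 0.01)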
error="database is locked" retry=0 09:48:57 grafana | logger=sqlstore.transactions t=2025-06-19T09:42:56.2927466Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 09:48:57 grafana | logger=sqlstore.transactions t=2025-06-19T09:42:56.300544201Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=1 09:48:57 grafana | logger=sqlstore.transactions t=2025-06-19T09:42:56.304294072Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=1 09:48:57 grafana | logger=plugin.angulardetectorsprovider.dynamic t=2025-06-19T09:42:56.319341171Z level=info msg="Patterns update finished" duration=135.21181ms 09:48:57 grafana | logger=provisioning.alerting t=2025-06-19T09:42:56.319888803Z level=info msg="starting to provision alerting" 09:48:57 grafana | logger=provisioning.alerting t=2025-06-19T09:42:56.319925624Z level=info msg="finished to provision alerting" 09:48:57 grafana | logger=provisioning.dashboard t=2025-06-19T09:42:56.322737765Z level=info msg="starting to provision dashboards" 09:48:57 grafana | logger=grafana-apiserver t=2025-06-19T09:42:56.664657204Z level=info msg="Adding GroupVersion userstorage.grafana.app v0alpha1 to ResourceManager" 09:48:57 grafana | logger=grafana-apiserver t=2025-06-19T09:42:56.666006723Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager" 09:48:57 grafana | logger=grafana-apiserver t=2025-06-19T09:42:56.679987968Z level=info msg="Adding GroupVersion dashboard.grafana.app v1beta1 to ResourceManager" 09:48:57 grafana | logger=grafana-apiserver t=2025-06-19T09:42:56.682610565Z level=info msg="Adding GroupVersion dashboard.grafana.app v0alpha1 to ResourceManager" 09:48:57 grafana | logger=grafana-apiserver t=2025-06-19T09:42:56.684487366Z level=info msg="Adding GroupVersion dashboard.grafana.app v2alpha1 to ResourceManager" 09:48:57 grafana | logger=grafana-apiserver t=2025-06-19T09:42:56.685468007Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager" 09:48:57 grafana | logger=grafana-apiserver t=2025-06-19T09:42:56.686722715Z level=info msg="Adding GroupVersion folder.grafana.app v1beta1 to ResourceManager" 09:48:57 grafana | logger=grafana-apiserver t=2025-06-19T09:42:56.688793661Z level=info msg="Adding GroupVersion iam.grafana.app v0alpha1 to ResourceManager" 09:48:57 grafana | logger=grafana-apiserver t=2025-06-19T09:42:56.691583641Z level=info msg="Adding GroupVersion notifications.alerting.grafana.app v0alpha1 to ResourceManager" 09:48:57 grafana | logger=app-registry t=2025-06-19T09:42:56.749091185Z level=info msg="app registry initialized" 09:48:57 grafana | logger=plugin.installer t=2025-06-19T09:42:56.869543433Z level=info msg="Installing plugin" pluginId=grafana-metricsdrilldown-app version= 09:48:57 grafana | logger=installer.fs t=2025-06-19T09:42:56.952790149Z level=info msg="Downloaded and extracted grafana-metricsdrilldown-app v1.0.3 zip successfully to /var/lib/grafana/plugins/grafana-metricsdrilldown-app" 09:48:57 grafana | logger=plugins.registration t=2025-06-19T09:42:56.997906063Z level=info msg="Plugin registered" pluginId=grafana-metricsdrilldown-app 09:48:57 grafana | logger=plugin.backgroundinstaller t=2025-06-19T09:42:56.998013296Z level=info msg="Plugin successfully installed" pluginId=grafana-metricsdrilldown-app version= duration=848.805356ms 09:48:57 grafana | logger=plugin.backgroundinstaller t=2025-06-19T09:42:56.998081137Z level=info msg="Installing 
plugin" pluginId=grafana-lokiexplore-app version= 09:48:57 grafana | logger=provisioning.dashboard t=2025-06-19T09:42:57.299983793Z level=info msg="finished to provision dashboards" 09:48:57 grafana | logger=plugin.installer t=2025-06-19T09:42:57.307499846Z level=info msg="Installing plugin" pluginId=grafana-lokiexplore-app version= 09:48:57 grafana | logger=installer.fs t=2025-06-19T09:42:57.443480763Z level=info msg="Downloaded and extracted grafana-lokiexplore-app v1.0.18 zip successfully to /var/lib/grafana/plugins/grafana-lokiexplore-app" 09:48:57 grafana | logger=plugins.registration t=2025-06-19T09:42:57.468706543Z level=info msg="Plugin registered" pluginId=grafana-lokiexplore-app 09:48:57 grafana | logger=plugin.backgroundinstaller t=2025-06-19T09:42:57.468736304Z level=info msg="Plugin successfully installed" pluginId=grafana-lokiexplore-app version= duration=470.642727ms 09:48:57 grafana | logger=plugin.backgroundinstaller t=2025-06-19T09:42:57.468764095Z level=info msg="Installing plugin" pluginId=grafana-pyroscope-app version= 09:48:57 grafana | logger=plugin.installer t=2025-06-19T09:42:57.848210861Z level=info msg="Installing plugin" pluginId=grafana-pyroscope-app version= 09:48:57 grafana | logger=installer.fs t=2025-06-19T09:42:57.906900632Z level=info msg="Downloaded and extracted grafana-pyroscope-app v1.4.1 zip successfully to /var/lib/grafana/plugins/grafana-pyroscope-app" 09:48:57 grafana | logger=plugins.registration t=2025-06-19T09:42:57.922880831Z level=info msg="Plugin registered" pluginId=grafana-pyroscope-app 09:48:57 grafana | logger=plugin.backgroundinstaller t=2025-06-19T09:42:57.922933362Z level=info msg="Plugin successfully installed" pluginId=grafana-pyroscope-app version= duration=454.162827ms 09:48:57 grafana | logger=plugin.backgroundinstaller t=2025-06-19T09:42:57.923010233Z level=info msg="Installing plugin" pluginId=grafana-exploretraces-app version= 09:48:57 grafana | logger=plugin.installer t=2025-06-19T09:42:58.192062542Z level=info msg="Installing plugin" pluginId=grafana-exploretraces-app version= 09:48:57 grafana | logger=installer.fs t=2025-06-19T09:42:58.254992486Z level=info msg="Downloaded and extracted grafana-exploretraces-app v1.0.0 zip successfully to /var/lib/grafana/plugins/grafana-exploretraces-app" 09:48:57 grafana | logger=plugins.registration t=2025-06-19T09:42:58.271044685Z level=info msg="Plugin registered" pluginId=grafana-exploretraces-app 09:48:57 grafana | logger=plugin.backgroundinstaller t=2025-06-19T09:42:58.271103946Z level=info msg="Plugin successfully installed" pluginId=grafana-exploretraces-app version= duration=348.066102ms 09:48:57 grafana | logger=infra.usagestats t=2025-06-19T09:44:16.175302603Z level=info msg="Usage stats are ready to report" 09:48:57 kafka | ===> User 09:48:57 kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) 09:48:57 kafka | ===> Configuring ... 09:48:57 kafka | Running in Zookeeper mode... 09:48:57 kafka | ===> Running preflight checks ... 09:48:57 kafka | ===> Check if /var/lib/kafka/data is writable ... 09:48:57 kafka | ===> Check if Zookeeper is healthy ... 
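The repeated grafana sqlstore.transactions entries above ("Database locked, sleeping then retrying") are Grafana's embedded SQLite store backing off when concurrent provisioning writes collide; the write is retried with an increasing retry counter instead of failing startup. A minimal Python sketch of the same sleep-then-retry pattern follows (illustrative only; the table name and backoff constants are assumptions, not Grafana's actual Go implementation):

import sqlite3
import time

def execute_with_retry(conn, sql, params=(), retries=5, base_delay=0.05):
    """Retry a write that fails with SQLite's 'database is locked' error,
    sleeping with exponential backoff between attempts."""
    for attempt in range(retries):
        try:
            with conn:  # commits on success, rolls back on error
                return conn.execute(sql, params)
        except sqlite3.OperationalError as exc:
            if "database is locked" not in str(exc) or attempt == retries - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))  # sleep, then retry

conn = sqlite3.connect("grafana.db", timeout=0)  # timeout=0 disables the built-in busy wait
conn.execute("CREATE TABLE IF NOT EXISTS data_source (name TEXT)")
execute_with_retry(conn, "INSERT INTO data_source (name) VALUES (?)", ("PolicyPrometheus",))

With timeout=0 the connection's built-in busy handler is off, so the loop above is the only thing absorbing lock contention, which makes the retry behaviour easy to observe.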
09:48:57 kafka | ===> User
09:48:57 kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser)
09:48:57 kafka | ===> Configuring ...
09:48:57 kafka | Running in Zookeeper mode...
09:48:57 kafka | ===> Running preflight checks ...
09:48:57 kafka | ===> Check if /var/lib/kafka/data is writable ...
09:48:57 kafka | ===> Check if Zookeeper is healthy ...
09:48:57 kafka | [2025-06-19 09:42:51,095] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper)
09:48:57 kafka | [2025-06-19 09:42:51,096] INFO Client environment:host.name=kafka (org.apache.zookeeper.ZooKeeper)
09:48:57 kafka | [2025-06-19 09:42:51,096] INFO Client environment:java.version=11.0.26 (org.apache.zookeeper.ZooKeeper)
09:48:57 kafka | [2025-06-19 09:42:51,096] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper)
09:48:57 kafka | [2025-06-19 09:42:51,096] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper)
09:48:57 kafka | [2025-06-19 09:42:51,096] INFO Client environment:java.class.path=/usr/share/java/cp-base-new/kafka-storage-api-7.4.9-ccs.jar:/usr/share/java/cp-base-new/scala-logging_2.13-3.9.4.jar:/usr/share/java/cp-base-new/jackson-datatype-jdk8-2.14.2.jar:/usr/share/java/cp-base-new/logredactor-1.0.12.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/kafka-server-common-7.4.9-ccs.jar:/usr/share/java/cp-base-new/scala-library-2.13.10.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/jackson-annotations-2.14.2.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/kafka-clients-7.4.9-ccs.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.14.2.jar:/usr/share/java/cp-base-new/kafka-storage-7.4.9-ccs.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/kafka-raft-7.4.9-ccs.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.8.4.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.2-1.jar:/usr/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-7.4.9-ccs.jar:/usr/share/java/cp-base-new/common-utils-7.4.9.jar:/usr/share/java/cp-base-new/jackson-dataformat-yaml-2.14.2.jar:/usr/share/java/cp-base-new/kafka-metadata-7.4.9-ccs.jar:/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/share/java/cp-base-new/jmx_prometheus_javaagent-0.18.0.jar:/usr/share/java/cp-base-new/reload4j-1.2.25.jar:/usr/share/java/cp-base-new/jackson-core-2.14.2.jar:/usr/share/java/cp-base-new/commons-io-2.16.0.jar:/usr/share/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/audience-annotations-0.12.0.jar:/usr/share/java/cp-base-new/gson-2.9.0.jar:/usr/share/java/cp-base-new/snakeyaml-2.0.jar:/usr/share/java/cp-base-new/zookeeper-3.8.4.jar:/usr/share/java/cp-base-new/kafka_2.13-7.4.9-ccs.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.12.jar:/usr/share/java/cp-base-new/utility-belt-7.4.9-53.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.10.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.14.2.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/jackson-databind-2.14.2.jar:/usr/share/java/cp-base-new/snappy-java-1.1.10.5.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.4.9.jar:/usr/share/java/cp-base-new/jose4j-0.9.5.jar (org.apache.zookeeper.ZooKeeper)
09:48:57 kafka | [2025-06-19 09:42:51,096] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
09:48:57 kafka | [2025-06-19 09:42:51,096] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
09:48:57 kafka | [2025-06-19 09:42:51,096] INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper)
09:48:57 kafka | [2025-06-19 09:42:51,096] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
09:48:57 kafka | [2025-06-19 09:42:51,096] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
09:48:57 kafka | [2025-06-19 09:42:51,096] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper)
09:48:57 kafka | [2025-06-19 09:42:51,096] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper)
09:48:57 kafka | [2025-06-19 09:42:51,096] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper)
09:48:57 kafka | [2025-06-19 09:42:51,096] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper)
09:48:57 kafka | [2025-06-19 09:42:51,096] INFO Client environment:os.memory.free=493MB (org.apache.zookeeper.ZooKeeper)
09:48:57 kafka | [2025-06-19 09:42:51,096] INFO Client environment:os.memory.max=8042MB (org.apache.zookeeper.ZooKeeper)
09:48:57 kafka | [2025-06-19 09:42:51,096] INFO Client environment:os.memory.total=504MB (org.apache.zookeeper.ZooKeeper)
09:48:57 kafka | [2025-06-19 09:42:51,099] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@19dc67c2 (org.apache.zookeeper.ZooKeeper)
09:48:57 kafka | [2025-06-19 09:42:51,102] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
09:48:57 kafka | [2025-06-19 09:42:51,107] INFO jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket)
09:48:57 kafka | [2025-06-19 09:42:51,113] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn)
09:48:57 kafka | [2025-06-19 09:42:51,131] INFO Opening socket connection to server zookeeper/172.17.0.4:2181. (org.apache.zookeeper.ClientCnxn)
09:48:57 kafka | [2025-06-19 09:42:51,132] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
09:48:57 kafka | [2025-06-19 09:42:51,142] INFO Socket connection established, initiating session, client: /172.17.0.8:42132, server: zookeeper/172.17.0.4:2181 (org.apache.zookeeper.ClientCnxn)
09:48:57 kafka | [2025-06-19 09:42:51,266] INFO Session establishment complete on server zookeeper/172.17.0.4:2181, session id = 0x1000002b9120000, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn)
09:48:57 kafka | [2025-06-19 09:42:51,407] INFO Session: 0x1000002b9120000 closed (org.apache.zookeeper.ZooKeeper)
09:48:57 kafka | [2025-06-19 09:42:51,408] INFO EventThread shut down for session: 0x1000002b9120000 (org.apache.zookeeper.ClientCnxn)
09:48:57 kafka | Using log4j config /etc/kafka/log4j.properties
09:48:57 kafka | ===> Launching ...
09:48:57 kafka | ===> Launching kafka ...
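The "===> Check if Zookeeper is healthy ..." preflight above is Confluent's Java utility at work (the io.confluent.admin.utils.ZookeeperConnectionWatcher session that opens and then immediately closes). A standalone equivalent can be sketched in Python with ZooKeeper's "ruok" four-letter command; treat this as an assumption-level sketch, since on ZooKeeper 3.5+ "ruok" must be enabled via 4lw.commands.whitelist and this container may not expose it:

import socket

def zookeeper_is_healthy(host="zookeeper", port=2181, timeout=5.0):
    """Send the 'ruok' four-letter word; a healthy ZooKeeper answers b'imok'."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.sendall(b"ruok")
            return sock.recv(4) == b"imok"
    except OSError:
        return False  # connection refused or timed out: not healthy

print("zookeeper healthy:", zookeeper_is_healthy())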
09:48:57 kafka | [2025-06-19 09:42:52,254] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
09:48:57 kafka | [2025-06-19 09:42:52,688] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
09:48:57 kafka | [2025-06-19 09:42:52,768] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler)
09:48:57 kafka | [2025-06-19 09:42:52,770] INFO starting (kafka.server.KafkaServer)
09:48:57 kafka | [2025-06-19 09:42:52,770] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer)
09:48:57 kafka | [2025-06-19 09:42:52,783] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181. (kafka.zookeeper.ZooKeeperClient)
09:48:57 kafka | [2025-06-19 09:42:52,787] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper)
09:48:57 kafka | [2025-06-19 09:42:52,787] INFO Client environment:host.name=kafka (org.apache.zookeeper.ZooKeeper)
09:48:57 kafka | [2025-06-19 09:42:52,787] INFO Client environment:java.version=11.0.26 (org.apache.zookeeper.ZooKeeper)
09:48:57 kafka | [2025-06-19 09:42:52,787] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper)
09:48:57 kafka | [2025-06-19 09:42:52,787] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper)
09:48:57 kafka | [2025-06-19 09:42:52,787] INFO Client environment:java.class.path=/usr/bin/../share/java/kafka/kafka-storage-api-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/netty-common-4.1.115.Final.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/connect-json-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/scala-library-2.13.10.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/kafka-shell-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/reflections-0.9.12.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/kafka-clients-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/kafka-storage-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/kafka-raft-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/kafka-streams-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.0.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.2-1.jar:/usr/bin/../share/java/kafka/connect-runtime-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.115.Final.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/connect-api-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.115.Final.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.115.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.1.2.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.115.Final.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.115.Final.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/trogdor-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/commons-io-2.16.0.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/connect-transforms-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/connect-mirror-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/kafka-tools-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.10.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.115.Final.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.4.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.115.Final.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.115.Final.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.57.v20241219.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.ZooKeeper)
09:48:57 kafka | [2025-06-19 09:42:52,787] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
09:48:57 kafka | [2025-06-19 09:42:52,788] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
09:48:57 kafka | [2025-06-19 09:42:52,788] INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper)
09:48:57 kafka | [2025-06-19 09:42:52,788] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
09:48:57 kafka | [2025-06-19 09:42:52,788] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
09:48:57 kafka | [2025-06-19 09:42:52,788] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper)
09:48:57 kafka | [2025-06-19 09:42:52,788] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper)
09:48:57 kafka | [2025-06-19 09:42:52,788] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper)
09:48:57 kafka | [2025-06-19 09:42:52,788] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper)
09:48:57 kafka | [2025-06-19 09:42:52,788] INFO Client environment:os.memory.free=1009MB (org.apache.zookeeper.ZooKeeper)
09:48:57 kafka | [2025-06-19 09:42:52,788] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper)
09:48:57 kafka | [2025-06-19 09:42:52,788] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper)
09:48:57 kafka | [2025-06-19 09:42:52,790] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@584f54e6 (org.apache.zookeeper.ZooKeeper)
09:48:57 kafka | [2025-06-19 09:42:52,793] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket)
09:48:57 kafka | [2025-06-19 09:42:52,799] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn)
09:48:57 kafka | [2025-06-19 09:42:52,803] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
09:48:57 kafka | [2025-06-19 09:42:52,807] INFO Opening socket connection to server zookeeper/172.17.0.4:2181. (org.apache.zookeeper.ClientCnxn)
09:48:57 kafka | [2025-06-19 09:42:52,812] INFO Socket connection established, initiating session, client: /172.17.0.8:60924, server: zookeeper/172.17.0.4:2181 (org.apache.zookeeper.ClientCnxn)
09:48:57 kafka | [2025-06-19 09:42:52,969] INFO Session establishment complete on server zookeeper/172.17.0.4:2181, session id = 0x1000002b9120001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn)
09:48:57 kafka | [2025-06-19 09:42:52,974] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient)
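With the session established, the broker registers itself as an ephemeral znode under /brokers/ids (the "Creating /brokers/ids/1" and "Registered broker 1" entries further down). A sketch of inspecting that registration from Python with the third-party kazoo client (an assumed dependency, not something this CSIT job installs):

from kazoo.client import KazooClient

# 18 s mirrors the 18000 ms session timeout negotiated in the log above.
zk = KazooClient(hosts="zookeeper:2181", timeout=18.0)
zk.start()
try:
    print("live brokers:", zk.get_children("/brokers/ids"))  # one ephemeral znode per broker
    data, stat = zk.get("/brokers/ids/1")  # JSON blob with the advertised listeners
    print(data.decode("utf-8"), "czxid:", stat.czxid)
finally:
    zk.stop()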
09:48:57 kafka | [2025-06-19 09:42:54,736] INFO Cluster ID = o0KanLTCS6-DDA3O4PV1_w (kafka.server.KafkaServer)
09:48:57 kafka | [2025-06-19 09:42:54,739] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint)
09:48:57 kafka | [2025-06-19 09:42:54,792] INFO KafkaConfig values:
09:48:57 kafka | advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
09:48:57 kafka | alter.config.policy.class.name = null
09:48:57 kafka | alter.log.dirs.replication.quota.window.num = 11
09:48:57 kafka | alter.log.dirs.replication.quota.window.size.seconds = 1
09:48:57 kafka | authorizer.class.name =
09:48:57 kafka | auto.create.topics.enable = true
09:48:57 kafka | auto.include.jmx.reporter = true
09:48:57 kafka | auto.leader.rebalance.enable = true
09:48:57 kafka | background.threads = 10
09:48:57 kafka | broker.heartbeat.interval.ms = 2000
09:48:57 kafka | broker.id = 1
09:48:57 kafka | broker.id.generation.enable = true
09:48:57 kafka | broker.rack = null
09:48:57 kafka | broker.session.timeout.ms = 9000
09:48:57 kafka | client.quota.callback.class = null
09:48:57 kafka | compression.type = producer
09:48:57 kafka | connection.failed.authentication.delay.ms = 100
09:48:57 kafka | connections.max.idle.ms = 600000
09:48:57 kafka | connections.max.reauth.ms = 0
09:48:57 kafka | control.plane.listener.name = null
09:48:57 kafka | controlled.shutdown.enable = true
09:48:57 kafka | controlled.shutdown.max.retries = 3
09:48:57 kafka | controlled.shutdown.retry.backoff.ms = 5000
09:48:57 kafka | controller.listener.names = null
09:48:57 kafka | controller.quorum.append.linger.ms = 25
09:48:57 kafka | controller.quorum.election.backoff.max.ms = 1000
09:48:57 kafka | controller.quorum.election.timeout.ms = 1000
09:48:57 kafka | controller.quorum.fetch.timeout.ms = 2000
09:48:57 kafka | controller.quorum.request.timeout.ms = 2000
09:48:57 kafka | controller.quorum.retry.backoff.ms = 20
09:48:57 kafka | controller.quorum.voters = []
09:48:57 kafka | controller.quota.window.num = 11
09:48:57 kafka | controller.quota.window.size.seconds = 1
09:48:57 kafka | controller.socket.timeout.ms = 30000
09:48:57 kafka | create.topic.policy.class.name = null
09:48:57 kafka | default.replication.factor = 1
09:48:57 kafka | delegation.token.expiry.check.interval.ms = 3600000
09:48:57 kafka | delegation.token.expiry.time.ms = 86400000
09:48:57 kafka | delegation.token.master.key = null
09:48:57 kafka | delegation.token.max.lifetime.ms = 604800000
09:48:57 kafka | delegation.token.secret.key = null
09:48:57 kafka | delete.records.purgatory.purge.interval.requests = 1
09:48:57 kafka | delete.topic.enable = true
09:48:57 kafka | early.start.listeners = null
09:48:57 kafka | fetch.max.bytes = 57671680
09:48:57 kafka | fetch.purgatory.purge.interval.requests = 1000
09:48:57 kafka | group.initial.rebalance.delay.ms = 3000
09:48:57 kafka | group.max.session.timeout.ms = 1800000
09:48:57 kafka | group.max.size = 2147483647
09:48:57 kafka | group.min.session.timeout.ms = 6000
09:48:57 kafka | initial.broker.registration.timeout.ms = 60000
09:48:57 kafka | inter.broker.listener.name = PLAINTEXT
09:48:57 kafka | inter.broker.protocol.version = 3.4-IV0
09:48:57 kafka | kafka.metrics.polling.interval.secs = 10
09:48:57 kafka | kafka.metrics.reporters = []
09:48:57 kafka | leader.imbalance.check.interval.seconds = 300
09:48:57 kafka | leader.imbalance.per.broker.percentage = 10
09:48:57 kafka | listener.security.protocol.map = PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
09:48:57 kafka | listeners = PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092
09:48:57 kafka | log.cleaner.backoff.ms = 15000
09:48:57 kafka | log.cleaner.dedupe.buffer.size = 134217728
09:48:57 kafka | log.cleaner.delete.retention.ms = 86400000
09:48:57 kafka | log.cleaner.enable = true
09:48:57 kafka | log.cleaner.io.buffer.load.factor = 0.9
09:48:57 kafka | log.cleaner.io.buffer.size = 524288
09:48:57 kafka | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
09:48:57 kafka | log.cleaner.max.compaction.lag.ms = 9223372036854775807
09:48:57 kafka | log.cleaner.min.cleanable.ratio = 0.5
09:48:57 kafka | log.cleaner.min.compaction.lag.ms = 0
09:48:57 kafka | log.cleaner.threads = 1
09:48:57 kafka | log.cleanup.policy = [delete]
09:48:57 kafka | log.dir = /tmp/kafka-logs
09:48:57 kafka | log.dirs = /var/lib/kafka/data
09:48:57 kafka | log.flush.interval.messages = 9223372036854775807
09:48:57 kafka | log.flush.interval.ms = null
09:48:57 kafka | log.flush.offset.checkpoint.interval.ms = 60000
09:48:57 kafka | log.flush.scheduler.interval.ms = 9223372036854775807
09:48:57 kafka | log.flush.start.offset.checkpoint.interval.ms = 60000
09:48:57 kafka | log.index.interval.bytes = 4096
09:48:57 kafka | log.index.size.max.bytes = 10485760
09:48:57 kafka | log.message.downconversion.enable = true
09:48:57 kafka | log.message.format.version = 3.0-IV1
09:48:57 kafka | log.message.timestamp.difference.max.ms = 9223372036854775807
09:48:57 kafka | log.message.timestamp.type = CreateTime
09:48:57 kafka | log.preallocate = false
09:48:57 kafka | log.retention.bytes = -1
09:48:57 kafka | log.retention.check.interval.ms = 300000
09:48:57 kafka | log.retention.hours = 168
09:48:57 kafka | log.retention.minutes = null
09:48:57 kafka | log.retention.ms = null
09:48:57 kafka | log.roll.hours = 168
09:48:57 kafka | log.roll.jitter.hours = 0
09:48:57 kafka | log.roll.jitter.ms = null
09:48:57 kafka | log.roll.ms = null
09:48:57 kafka | log.segment.bytes = 1073741824
09:48:57 kafka | log.segment.delete.delay.ms = 60000
09:48:57 kafka | max.connection.creation.rate = 2147483647
09:48:57 kafka | max.connections = 2147483647
09:48:57 kafka | max.connections.per.ip = 2147483647
09:48:57 kafka | max.connections.per.ip.overrides =
09:48:57 kafka | max.incremental.fetch.session.cache.slots = 1000
09:48:57 kafka | message.max.bytes = 1048588
09:48:57 kafka | metadata.log.dir = null
09:48:57 kafka | metadata.log.max.record.bytes.between.snapshots = 20971520
09:48:57 kafka | metadata.log.max.snapshot.interval.ms = 3600000
09:48:57 kafka | metadata.log.segment.bytes = 1073741824
09:48:57 kafka | metadata.log.segment.min.bytes = 8388608
09:48:57 kafka | metadata.log.segment.ms = 604800000
09:48:57 kafka | metadata.max.idle.interval.ms = 500
09:48:57 kafka | metadata.max.retention.bytes = 104857600
09:48:57 kafka | metadata.max.retention.ms = 604800000
09:48:57 kafka | metric.reporters = []
09:48:57 kafka | metrics.num.samples = 2
09:48:57 kafka | metrics.recording.level = INFO
09:48:57 kafka | metrics.sample.window.ms = 30000
09:48:57 kafka | min.insync.replicas = 1
09:48:57 kafka | node.id = 1
09:48:57 kafka | num.io.threads = 8
09:48:57 kafka | num.network.threads = 3
09:48:57 kafka | num.partitions = 1
09:48:57 kafka | num.recovery.threads.per.data.dir = 1
09:48:57 kafka | num.replica.alter.log.dirs.threads = null
09:48:57 kafka | num.replica.fetchers = 1
09:48:57 kafka | offset.metadata.max.bytes = 4096
09:48:57 kafka | offsets.commit.required.acks = -1
09:48:57 kafka | offsets.commit.timeout.ms = 5000
09:48:57 kafka | offsets.load.buffer.size = 5242880
09:48:57 kafka | offsets.retention.check.interval.ms = 600000
09:48:57 kafka | offsets.retention.minutes = 10080
09:48:57 kafka | offsets.topic.compression.codec = 0
09:48:57 kafka | offsets.topic.num.partitions = 50
09:48:57 kafka | offsets.topic.replication.factor = 1
09:48:57 kafka | offsets.topic.segment.bytes = 104857600
09:48:57 kafka | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
09:48:57 kafka | password.encoder.iterations = 4096
09:48:57 kafka | password.encoder.key.length = 128
09:48:57 kafka | password.encoder.keyfactory.algorithm = null
09:48:57 kafka | password.encoder.old.secret = null
09:48:57 kafka | password.encoder.secret = null
09:48:57 kafka | principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder
09:48:57 kafka | process.roles = []
09:48:57 kafka | producer.id.expiration.check.interval.ms = 600000
09:48:57 kafka | producer.id.expiration.ms = 86400000
09:48:57 kafka | producer.purgatory.purge.interval.requests = 1000
09:48:57 kafka | queued.max.request.bytes = -1
09:48:57 kafka | queued.max.requests = 500
09:48:57 kafka | quota.window.num = 11
09:48:57 kafka | quota.window.size.seconds = 1
09:48:57 kafka | remote.log.index.file.cache.total.size.bytes = 1073741824
09:48:57 kafka | remote.log.manager.task.interval.ms = 30000
09:48:57 kafka | remote.log.manager.task.retry.backoff.max.ms = 30000
09:48:57 kafka | remote.log.manager.task.retry.backoff.ms = 500
09:48:57 kafka | remote.log.manager.task.retry.jitter = 0.2
09:48:57 kafka | remote.log.manager.thread.pool.size = 10
09:48:57 kafka | remote.log.metadata.manager.class.name = null
09:48:57 kafka | remote.log.metadata.manager.class.path = null
09:48:57 kafka | remote.log.metadata.manager.impl.prefix = null
09:48:57 kafka | remote.log.metadata.manager.listener.name = null
09:48:57 kafka | remote.log.reader.max.pending.tasks = 100
09:48:57 kafka | remote.log.reader.threads = 10
09:48:57 kafka | remote.log.storage.manager.class.name = null
09:48:57 kafka | remote.log.storage.manager.class.path = null
09:48:57 kafka | remote.log.storage.manager.impl.prefix = null
09:48:57 kafka | remote.log.storage.system.enable = false
09:48:57 kafka | replica.fetch.backoff.ms = 1000
09:48:57 kafka | replica.fetch.max.bytes = 1048576
09:48:57 kafka | replica.fetch.min.bytes = 1
09:48:57 kafka | replica.fetch.response.max.bytes = 10485760
09:48:57 kafka | replica.fetch.wait.max.ms = 500
09:48:57 kafka | replica.high.watermark.checkpoint.interval.ms = 5000
09:48:57 kafka | replica.lag.time.max.ms = 30000
09:48:57 kafka | replica.selector.class = null
09:48:57 kafka | replica.socket.receive.buffer.bytes = 65536
09:48:57 kafka | replica.socket.timeout.ms = 30000
09:48:57 kafka | replication.quota.window.num = 11
09:48:57 kafka | replication.quota.window.size.seconds = 1
09:48:57 kafka | request.timeout.ms = 30000
09:48:57 kafka | reserved.broker.max.id = 1000
09:48:57 kafka | sasl.client.callback.handler.class = null
09:48:57 kafka | sasl.enabled.mechanisms = [GSSAPI]
09:48:57 kafka | sasl.jaas.config = null
09:48:57 kafka | sasl.kerberos.kinit.cmd = /usr/bin/kinit
09:48:57 kafka | sasl.kerberos.min.time.before.relogin = 60000
09:48:57 kafka | sasl.kerberos.principal.to.local.rules = [DEFAULT]
09:48:57 kafka | sasl.kerberos.service.name = null
09:48:57 kafka | sasl.kerberos.ticket.renew.jitter = 0.05
09:48:57 kafka | sasl.kerberos.ticket.renew.window.factor = 0.8
09:48:57 kafka | sasl.login.callback.handler.class = null
09:48:57 kafka | sasl.login.class = null
09:48:57 kafka | sasl.login.connect.timeout.ms = null
09:48:57 kafka | sasl.login.read.timeout.ms = null
09:48:57 kafka | sasl.login.refresh.buffer.seconds = 300
09:48:57 kafka | sasl.login.refresh.min.period.seconds = 60
09:48:57 kafka | sasl.login.refresh.window.factor = 0.8
09:48:57 kafka | sasl.login.refresh.window.jitter = 0.05
09:48:57 kafka | sasl.login.retry.backoff.max.ms = 10000
09:48:57 kafka | sasl.login.retry.backoff.ms = 100
09:48:57 kafka | sasl.mechanism.controller.protocol = GSSAPI
09:48:57 kafka | sasl.mechanism.inter.broker.protocol = GSSAPI
09:48:57 kafka | sasl.oauthbearer.clock.skew.seconds = 30
09:48:57 kafka | sasl.oauthbearer.expected.audience = null
09:48:57 kafka | sasl.oauthbearer.expected.issuer = null
09:48:57 kafka | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
09:48:57 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
09:48:57 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
09:48:57 kafka | sasl.oauthbearer.jwks.endpoint.url = null
09:48:57 kafka | sasl.oauthbearer.scope.claim.name = scope
09:48:57 kafka | sasl.oauthbearer.sub.claim.name = sub
09:48:57 kafka | sasl.oauthbearer.token.endpoint.url = null
09:48:57 kafka | sasl.server.callback.handler.class = null
09:48:57 kafka | sasl.server.max.receive.size = 524288
09:48:57 kafka | security.inter.broker.protocol = PLAINTEXT
09:48:57 kafka | security.providers = null
09:48:57 kafka | socket.connection.setup.timeout.max.ms = 30000
09:48:57 kafka | socket.connection.setup.timeout.ms = 10000
09:48:57 kafka | socket.listen.backlog.size = 50
09:48:57 kafka | socket.receive.buffer.bytes = 102400
09:48:57 kafka | socket.request.max.bytes = 104857600
09:48:57 kafka | socket.send.buffer.bytes = 102400
09:48:57 kafka | ssl.cipher.suites = []
09:48:57 kafka | ssl.client.auth = none
09:48:57 kafka | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
09:48:57 kafka | ssl.endpoint.identification.algorithm = https
09:48:57 kafka | ssl.engine.factory.class = null
09:48:57 kafka | ssl.key.password = null
09:48:57 kafka | ssl.keymanager.algorithm = SunX509
09:48:57 kafka | ssl.keystore.certificate.chain = null
09:48:57 kafka | ssl.keystore.key = null
09:48:57 kafka | ssl.keystore.location = null
09:48:57 kafka | ssl.keystore.password = null
09:48:57 kafka | ssl.keystore.type = JKS
09:48:57 kafka | ssl.principal.mapping.rules = DEFAULT
09:48:57 kafka | ssl.protocol = TLSv1.3
09:48:57 kafka | ssl.provider = null
09:48:57 kafka | ssl.secure.random.implementation = null
09:48:57 kafka | ssl.trustmanager.algorithm = PKIX
09:48:57 kafka | ssl.truststore.certificates = null
09:48:57 kafka | ssl.truststore.location = null
09:48:57 kafka | ssl.truststore.password = null
09:48:57 kafka | ssl.truststore.type = JKS
09:48:57 kafka | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000
09:48:57 kafka | transaction.max.timeout.ms = 900000
09:48:57 kafka | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
09:48:57 kafka | transaction.state.log.load.buffer.size = 5242880
09:48:57 kafka | transaction.state.log.min.isr = 2
09:48:57 kafka | transaction.state.log.num.partitions = 50
09:48:57 kafka | transaction.state.log.replication.factor = 3
09:48:57 kafka | transaction.state.log.segment.bytes = 104857600
09:48:57 kafka | transactional.id.expiration.ms = 604800000
09:48:57 kafka | unclean.leader.election.enable = false
09:48:57 kafka | zookeeper.clientCnxnSocket = null
09:48:57 kafka | zookeeper.connect = zookeeper:2181
09:48:57 kafka | zookeeper.connection.timeout.ms = null
09:48:57 kafka | zookeeper.max.in.flight.requests = 10
09:48:57 kafka | zookeeper.metadata.migration.enable = false
09:48:57 kafka | zookeeper.session.timeout.ms = 18000
09:48:57 kafka | zookeeper.set.acl = false
09:48:57 kafka | zookeeper.ssl.cipher.suites = null
09:48:57 kafka | zookeeper.ssl.client.enable = false
09:48:57 kafka | zookeeper.ssl.crl.enable = false
09:48:57 kafka | zookeeper.ssl.enabled.protocols = null
09:48:57 kafka | zookeeper.ssl.endpoint.identification.algorithm = HTTPS
09:48:57 kafka | zookeeper.ssl.keystore.location = null
09:48:57 kafka | zookeeper.ssl.keystore.password = null
09:48:57 kafka | zookeeper.ssl.keystore.type = null
09:48:57 kafka | zookeeper.ssl.ocsp.enable = false
09:48:57 kafka | zookeeper.ssl.protocol = TLSv1.2
09:48:57 kafka | zookeeper.ssl.truststore.location = null
09:48:57 kafka | zookeeper.ssl.truststore.password = null
09:48:57 kafka | zookeeper.ssl.truststore.type = null
09:48:57 kafka | (kafka.server.KafkaConfig)
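Every value in the dump above can also be read back from the running broker. A sketch with the confluent-kafka Python package (an assumed dependency; localhost:29092 is the host-mapped PLAINTEXT_HOST listener advertised in the dump):

from confluent_kafka.admin import AdminClient, ConfigResource

admin = AdminClient({"bootstrap.servers": "localhost:29092"})
broker = ConfigResource(ConfigResource.Type.BROKER, "1")  # broker.id = 1 per the dump
configs = admin.describe_configs([broker])[broker].result()
for name in ("offsets.topic.num.partitions", "auto.create.topics.enable", "log.dirs"):
    print(name, "=", configs[name].value)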
09:48:57 kafka | [2025-06-19 09:42:54,830] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
09:48:57 kafka | [2025-06-19 09:42:54,829] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
09:48:57 kafka | [2025-06-19 09:42:54,829] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
09:48:57 kafka | [2025-06-19 09:42:54,833] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
09:48:57 kafka | [2025-06-19 09:42:54,867] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager)
09:48:57 kafka | [2025-06-19 09:42:54,869] INFO Attempting recovery for all logs in /var/lib/kafka/data since no clean shutdown file was found (kafka.log.LogManager)
09:48:57 kafka | [2025-06-19 09:42:54,881] INFO Loaded 0 logs in 14ms. (kafka.log.LogManager)
09:48:57 kafka | [2025-06-19 09:42:54,881] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
09:48:57 kafka | [2025-06-19 09:42:54,884] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
09:48:57 kafka | [2025-06-19 09:42:54,897] INFO Starting the log cleaner (kafka.log.LogCleaner)
09:48:57 kafka | [2025-06-19 09:42:54,961] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner)
09:48:57 kafka | [2025-06-19 09:42:54,974] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread)
09:48:57 kafka | [2025-06-19 09:42:54,993] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener)
09:48:57 kafka | [2025-06-19 09:42:55,037] INFO [BrokerToControllerChannelManager broker=1 name=forwarding]: Starting (kafka.server.BrokerToControllerRequestThread)
09:48:57 kafka | [2025-06-19 09:42:55,395] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
09:48:57 kafka | [2025-06-19 09:42:55,398] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.DataPlaneAcceptor)
09:48:57 kafka | [2025-06-19 09:42:55,422] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer)
09:48:57 kafka | [2025-06-19 09:42:55,423] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
09:48:57 kafka | [2025-06-19 09:42:55,423] INFO Awaiting socket connections on 0.0.0.0:29092. (kafka.network.DataPlaneAcceptor)
09:48:57 kafka | [2025-06-19 09:42:55,428] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT_HOST) (kafka.network.SocketServer)
09:48:57 kafka | [2025-06-19 09:42:55,432] INFO [BrokerToControllerChannelManager broker=1 name=alterPartition]: Starting (kafka.server.BrokerToControllerRequestThread)
09:48:57 kafka | [2025-06-19 09:42:55,452] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
09:48:57 kafka | [2025-06-19 09:42:55,458] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
09:48:57 kafka | [2025-06-19 09:42:55,458] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
09:48:57 kafka | [2025-06-19 09:42:55,458] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
09:48:57 kafka | [2025-06-19 09:42:55,473] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
09:48:57 kafka | [2025-06-19 09:42:55,505] INFO Creating /brokers/ids/1 (is it secure? false) (kafka.zk.KafkaZkClient)
09:48:57 kafka | [2025-06-19 09:42:55,534] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1750326175522,1750326175522,1,0,0,72057605732827137,258,0,27
09:48:57 kafka | (kafka.zk.KafkaZkClient)
09:48:57 kafka | [2025-06-19 09:42:55,536] INFO Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, czxid (broker epoch): 27 (kafka.zk.KafkaZkClient)
09:48:57 kafka | [2025-06-19 09:42:55,607] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread)
09:48:57 kafka | [2025-06-19 09:42:55,618] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
09:48:57 kafka | [2025-06-19 09:42:55,623] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
09:48:57 kafka | [2025-06-19 09:42:55,624] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
09:48:57 kafka | [2025-06-19 09:42:55,633] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient)
09:48:57 kafka | [2025-06-19 09:42:55,642] INFO [Controller id=1] 1 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController)
09:48:57 kafka | [2025-06-19 09:42:55,647] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator)
09:48:57 kafka | [2025-06-19 09:42:55,648] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController)
09:48:57 kafka | [2025-06-19 09:42:55,652] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
09:48:57 kafka | [2025-06-19 09:42:55,655] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener)
09:48:57 kafka | [2025-06-19 09:42:55,676] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
09:48:57 kafka | [2025-06-19 09:42:55,683] INFO [Transaction Marker Channel Manager 1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
09:48:57 kafka | [2025-06-19 09:42:55,683] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
09:48:57 kafka | [2025-06-19 09:42:55,691] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController)
09:48:57 kafka | [2025-06-19 09:42:55,692] INFO [MetadataCache brokerId=1] Updated cache from existing to latest FinalizedFeaturesAndEpoch(features=Map(), epoch=0). (kafka.server.metadata.ZkMetadataCache)
09:48:57 kafka | [2025-06-19 09:42:55,698] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController)
09:48:57 kafka | [2025-06-19 09:42:55,702] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController)
09:48:57 kafka | [2025-06-19 09:42:55,705] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController)
09:48:57 kafka | [2025-06-19 09:42:55,720] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController)
09:48:57 kafka | [2025-06-19 09:42:55,726] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController)
09:48:57 kafka | [2025-06-19 09:42:55,729] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
09:48:57 kafka | [2025-06-19 09:42:55,732] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager)
09:48:57 kafka | [2025-06-19 09:42:55,741] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController)
09:48:57 kafka | [2025-06-19 09:42:55,742] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread)
09:48:57 kafka | [2025-06-19 09:42:55,742] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController)
09:48:57 kafka | [2025-06-19 09:42:55,743] INFO [Controller id=1] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController)
09:48:57 kafka | [2025-06-19 09:42:55,743] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController)
09:48:57 kafka | [2025-06-19 09:42:55,746] INFO [Controller id=1] List of topics to be deleted: (kafka.controller.KafkaController)
09:48:57 kafka | [2025-06-19 09:42:55,747] INFO [Controller id=1] List of topics ineligible for deletion: (kafka.controller.KafkaController)
09:48:57 kafka | [2025-06-19 09:42:55,747] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController)
09:48:57 kafka | [2025-06-19 09:42:55,748] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager)
09:48:57 kafka | [2025-06-19 09:42:55,749] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController)
09:48:57 kafka | [2025-06-19 09:42:55,752] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger)
09:48:57 kafka | [2025-06-19 09:42:55,758] INFO [ReplicaStateMachine controllerId=1] Initializing replica state (kafka.controller.ZkReplicaStateMachine)
09:48:57 kafka | [2025-06-19 09:42:55,758] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine)
09:48:57 kafka | [2025-06-19 09:42:55,769] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
09:48:57 kafka | [2025-06-19 09:42:55,770] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread)
09:48:57 kafka | [2025-06-19 09:42:55,770] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine)
09:48:57 kafka | [2025-06-19 09:42:55,770] DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine)
09:48:57 kafka | [2025-06-19 09:42:55,771] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine)
09:48:57 kafka | [2025-06-19 09:42:55,771] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine)
09:48:57 kafka | [2025-06-19 09:42:55,774] DEBUG [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine)
09:48:57 kafka | [2025-06-19 09:42:55,774] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController)
09:48:57 kafka | [2025-06-19 09:42:55,782] INFO [Controller id=1] Partitions undergoing preferred replica election: (kafka.controller.KafkaController)
09:48:57 kafka | [2025-06-19 09:42:55,783] INFO [Controller id=1] Partitions that completed preferred replica election: (kafka.controller.KafkaController)
09:48:57 kafka | [2025-06-19 09:42:55,783] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController)
09:48:57 kafka | [2025-06-19 09:42:55,784] INFO [Controller id=1] Resuming preferred replica election for partitions: (kafka.controller.KafkaController)
09:48:57 kafka | [2025-06-19 09:42:55,785] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. (kafka.network.SocketServer)
09:48:57 kafka | [2025-06-19 09:42:55,785] INFO [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered (kafka.controller.KafkaController)
09:48:57 kafka | [2025-06-19 09:42:55,810] INFO Kafka version: 7.4.9-ccs (org.apache.kafka.common.utils.AppInfoParser)
09:48:57 kafka | [2025-06-19 09:42:55,810] INFO Kafka commitId: 07d888cfc0d14765fe5557324f1fdb4ada6698a5 (org.apache.kafka.common.utils.AppInfoParser)
09:48:57 kafka | [2025-06-19 09:42:55,810] INFO Kafka startTimeMs: 1750326175802 (org.apache.kafka.common.utils.AppInfoParser)
09:48:57 kafka | [2025-06-19 09:42:55,810] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController)
09:48:57 kafka | [2025-06-19 09:42:55,812] INFO [KafkaServer id=1] started (kafka.server.KafkaServer)
09:48:57 kafka | [2025-06-19 09:42:55,858] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 0 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
09:48:57 kafka | [2025-06-19 09:42:55,871] INFO [BrokerToControllerChannelManager broker=1 name=forwarding]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread)
09:48:57 kafka | [2025-06-19 09:42:55,937] INFO [BrokerToControllerChannelManager broker=1 name=alterPartition]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread)
09:48:57 kafka | [2025-06-19 09:43:00,812] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController)
09:48:57 kafka | [2025-06-19 09:43:00,812] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController)
09:48:57 kafka | [2025-06-19 09:43:27,862] INFO Creating topic policy-pdp-pap with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient)
09:48:57 kafka | [2025-06-19 09:43:27,878] INFO Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient)
09:48:57 kafka | [2025-06-19 09:43:27,881] DEBUG [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block (kafka.controller.KafkaController)
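auto.create.topics.enable = true in the config dump is why these topics appear as soon as the first client touches them. Creating policy-pdp-pap explicitly would look like the following confluent-kafka sketch (an assumed dependency), mirroring the one-partition, replication-factor-1 assignment logged above:

from confluent_kafka.admin import AdminClient, NewTopic

admin = AdminClient({"bootstrap.servers": "localhost:29092"})
futures = admin.create_topics([NewTopic("policy-pdp-pap", num_partitions=1, replication_factor=1)])
for topic, future in futures.items():
    try:
        future.result()  # None on success
        print(f"topic {topic} created")
    except Exception as exc:  # e.g. TOPIC_ALREADY_EXISTS once auto-creation has run
        print(f"topic {topic}: {exc}")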
09:48:57 kafka | [2025-06-19 09:43:27,887] INFO [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 (kafka.controller.KafkaController)
09:48:57 kafka | [2025-06-19 09:43:27,909] INFO [Controller id=1] New topics: [Set(policy-pdp-pap, __consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-pdp-pap,Some(_EGxPVSHToun88wdUralMA),Map(policy-pdp-pap-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))), TopicIdReplicaAssignment(__consumer_offsets,Some(SLPuTvShSYSLKtLkT53E1g),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController)
09:48:57 kafka | [2025-06-19 09:43:27,910] INFO [Controller id=1] New partition creation callback for __consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-37,policy-pdp-pap-0,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.controller.KafkaController)
09:48:57 kafka | [2025-06-19 09:43:27,912] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:27,913] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:27,913] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:27,913] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:27,913] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:27,913] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:27,913] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:27,913] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:27,913] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:27,913] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:27,913] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:27,913] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:27,913] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:27,913] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:27,913] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:27,913] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:27,913] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:27,913] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:27,913] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:27,913] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:27,913] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:27,913] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:27,913] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:27,913] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:27,913] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:27,913] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:27,913] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:27,914] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:27,914] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:27,914] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:27,914] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:27,914] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:27,914] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:27,914] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:27,914] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:27,914] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:27,914] INFO [Controller id=1 epoch=1] Changed partition
__consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:27,914] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:27,914] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:27,914] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:27,914] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:27,914] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:27,914] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:27,914] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:27,914] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:27,914] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:27,914] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:27,914] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:27,914] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:27,914] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:27,914] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:27,914] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:27,919] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NonExistentReplica to NewReplica (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:27,919] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from 
NonExistentReplica to NewReplica (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:27,919] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NonExistentReplica to NewReplica (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:27,919] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NonExistentReplica to NewReplica (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:27,919] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NonExistentReplica to NewReplica (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:27,919] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NonExistentReplica to NewReplica (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:27,919] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NonExistentReplica to NewReplica (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:27,919] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NonExistentReplica to NewReplica (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:27,919] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NonExistentReplica to NewReplica (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:27,920] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NonExistentReplica to NewReplica (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:27,920] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NonExistentReplica to NewReplica (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:27,920] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NonExistentReplica to NewReplica (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:27,920] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NonExistentReplica to NewReplica (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:27,920] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NonExistentReplica to NewReplica (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:27,920] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NonExistentReplica to NewReplica (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:27,920] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NonExistentReplica to NewReplica (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:27,920] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NonExistentReplica to NewReplica (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:27,920] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NonExistentReplica to NewReplica (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:27,920] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NonExistentReplica to NewReplica (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:27,920] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 
from NonExistentReplica to NewReplica (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:27,920] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NonExistentReplica to NewReplica (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:27,920] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NonExistentReplica to NewReplica (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:27,920] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NonExistentReplica to NewReplica (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:27,920] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NonExistentReplica to NewReplica (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:27,920] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NonExistentReplica to NewReplica (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:27,920] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NonExistentReplica to NewReplica (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:27,920] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NonExistentReplica to NewReplica (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:27,920] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NonExistentReplica to NewReplica (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:27,920] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NonExistentReplica to NewReplica (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:27,920] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NonExistentReplica to NewReplica (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:27,920] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NonExistentReplica to NewReplica (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:27,920] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NonExistentReplica to NewReplica (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:27,920] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NonExistentReplica to NewReplica (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:27,920] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NonExistentReplica to NewReplica (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:27,920] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NonExistentReplica to NewReplica (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:27,920] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NonExistentReplica to NewReplica (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:27,920] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NonExistentReplica to NewReplica (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:27,920] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition 
__consumer_offsets-38 from NonExistentReplica to NewReplica (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:27,920] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NonExistentReplica to NewReplica (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:27,920] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NonExistentReplica to NewReplica (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:27,920] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NonExistentReplica to NewReplica (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:27,920] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NonExistentReplica to NewReplica (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:27,920] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NonExistentReplica to NewReplica (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:27,920] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NonExistentReplica to NewReplica (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:27,920] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NonExistentReplica to NewReplica (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:27,920] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NonExistentReplica to NewReplica (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:27,920] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NonExistentReplica to NewReplica (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:27,920] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NonExistentReplica to NewReplica (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:27,920] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NonExistentReplica to NewReplica (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:27,921] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NonExistentReplica to NewReplica (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:27,921] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NonExistentReplica to NewReplica (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:27,921] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,051] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,051] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,051] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state 
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,051] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,051] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,051] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,051] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,052] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,052] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,052] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,052] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,052] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,052] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,052] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,052] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:48:57 kafka | [2025-06-19 
09:43:28,052] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,052] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,052] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,052] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,052] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,052] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,052] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,052] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,052] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,052] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,052] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,052] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,052] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state 
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,052] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,052] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,052] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,052] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,052] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,052] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,052] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,052] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,053] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,053] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,053] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,053] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:48:57 kafka | 
[2025-06-19 09:43:28,053] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,053] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,053] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,053] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,053] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,053] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,053] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,053] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,053] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,053] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,053] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,055] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-13 (state.change.logger) 09:48:57 kafka | 
[2025-06-19 09:43:28,055] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-46 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,055] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-9 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,055] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-42 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,055] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-21 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,055] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-17 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,055] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-30 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,055] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-26 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,055] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-5 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,056] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-38 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,056] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-1 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,056] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-34 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,056] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-16 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,056] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-45 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,056] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-12 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,056] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-41 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,056] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-24 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,056] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, 
replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-20 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,056] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-49 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,056] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-0 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,056] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-29 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,056] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-25 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,056] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-8 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,056] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-37 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,056] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-4 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,056] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-33 
(state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,056] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-15 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,056] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-48 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,056] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-11 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,056] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-44 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,056] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-23 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,056] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-19 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,056] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-32 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,056] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-28 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,056] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-7 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,056] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-40 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,056] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-3 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,056] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-36 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,057] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-47 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,057] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-14 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,057] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-43 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,057] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-10 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,057] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, 
replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-22 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,057] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-18 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,057] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-31 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,057] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-27 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,057] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-39 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,057] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-6 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,057] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-35 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,057] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-pdp-pap-0 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,057] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-2 (state.change.logger) 
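The TRACE entries above show the ZooKeeper-mode controller queuing a become-leader LeaderAndIsr request for every partition of __consumer_offsets plus policy-pdp-pap-0. Each LeaderAndIsrPartitionState(...) tuple carries the leadership data for one partition: leader broker, leaderEpoch, ISR, partitionEpoch, and replica set. A minimal, hypothetical helper for pulling those fields out of a captured log line while debugging a CSIT run (the regex mirrors the text printed by state.change.logger above; it is not part of any Kafka API):

    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class LeaderAndIsrLineParser {
        // Field layout as printed by state.change.logger in the entries above.
        private static final Pattern FIELDS = Pattern.compile(
                "topicName='(?<topic>[^']+)', partitionIndex=(?<idx>\\d+), "
                + "controllerEpoch=(?<cepoch>\\d+), leader=(?<leader>\\d+), "
                + "leaderEpoch=(?<lepoch>\\d+), isr=\\[(?<isr>[^\\]]*)\\]");

        public static void main(String[] args) {
            // One of the tuples from the log above, for policy-pdp-pap-0.
            String line = "LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0,"
                    + " controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0,"
                    + " replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true,"
                    + " leaderRecoveryState=0)";
            Matcher m = FIELDS.matcher(line);
            if (m.find()) {
                // Prints: policy-pdp-pap-0: leader=1 leaderEpoch=0 isr=[1]
                System.out.printf("%s-%s: leader=%s leaderEpoch=%s isr=[%s]%n",
                        m.group("topic"), m.group("idx"), m.group("leader"),
                        m.group("lepoch"), m.group("isr"));
            }
        }
    }

With a single broker in this CSIT environment, every tuple reduces to leader=1, isr=[1], which is why the controller reports 51 become-leader and 0 become-follower partitions just below.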
09:48:57 kafka | [2025-06-19 09:43:28,058] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 51 become-leader and 0 become-follower partitions (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,060] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 51 partitions (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,061] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NewReplica to OnlineReplica (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,061] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NewReplica to OnlineReplica (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,061] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NewReplica to OnlineReplica (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,061] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NewReplica to OnlineReplica (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,061] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NewReplica to OnlineReplica (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,061] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NewReplica to OnlineReplica (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,062] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NewReplica to OnlineReplica (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,062] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NewReplica to OnlineReplica (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,062] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NewReplica to OnlineReplica (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,062] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NewReplica to OnlineReplica (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,062] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NewReplica to OnlineReplica (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,062] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NewReplica to OnlineReplica (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,062] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NewReplica to OnlineReplica (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,062] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NewReplica to OnlineReplica (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,062] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NewReplica to OnlineReplica (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,062] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NewReplica to OnlineReplica (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,062] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NewReplica to OnlineReplica (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,062] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NewReplica to OnlineReplica (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,062] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NewReplica to OnlineReplica (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,062] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NewReplica to OnlineReplica (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,062] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NewReplica to OnlineReplica (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,062] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NewReplica to OnlineReplica (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,062] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NewReplica to OnlineReplica (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,062] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NewReplica to OnlineReplica (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,062] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NewReplica to OnlineReplica (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,062] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NewReplica to OnlineReplica (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,062] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NewReplica to OnlineReplica (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,062] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NewReplica to OnlineReplica (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,062] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NewReplica to OnlineReplica (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,062] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NewReplica to OnlineReplica (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,062] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NewReplica to OnlineReplica (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,062] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NewReplica to OnlineReplica (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,062] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NewReplica to OnlineReplica (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,062] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NewReplica to OnlineReplica (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,062] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NewReplica to OnlineReplica (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,063] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NewReplica to OnlineReplica (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,063] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NewReplica to OnlineReplica (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,063] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NewReplica to OnlineReplica (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,063] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NewReplica to OnlineReplica (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,063] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NewReplica to OnlineReplica (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,063] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NewReplica to OnlineReplica (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,063] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NewReplica to OnlineReplica (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,063] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NewReplica to OnlineReplica (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,063] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NewReplica to OnlineReplica (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,063] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NewReplica to OnlineReplica (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,063] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NewReplica to OnlineReplica (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,063] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NewReplica to OnlineReplica (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,063] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NewReplica to OnlineReplica (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,063] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NewReplica to OnlineReplica (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,063] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NewReplica to OnlineReplica (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,063] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NewReplica to OnlineReplica (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,063] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,069] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 51 partitions (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,070] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,070] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,070] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,070] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,070] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,070] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,070] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,070] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,070] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,070] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,070] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,070] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,070] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,070] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,070] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,070] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,071] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,071] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,071] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,071] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,071] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,071] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,071] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,071] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,071] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,071] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,071] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,071] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,071] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,071] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,071] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,071] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,071] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,071] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,071] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,071] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,071] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,071] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,071] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,072] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,072] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,072] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,072] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,072] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,072] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,072] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,072] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,072] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,072] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,072] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,072] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,110] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-3 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,110] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,110] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-41 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,110] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,110] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-33 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,110] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,110] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-19 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,110] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-34 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,110] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-4 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,110] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-11 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,110] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,110] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-49 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,110] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-39 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,110] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-9 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,110] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-24 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,110] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-31 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,110] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-46 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,110] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-1 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,110] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-16 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,110] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-2 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,110] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-25 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,110] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-40 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,110] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-47 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,110] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,110] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-32 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,110] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-37 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,110] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-7 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,110] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-22 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,110] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,110] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-44 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,110] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-14 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,110] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-23 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,110] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-38 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,110] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,110] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition policy-pdp-pap-0 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,110] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-45 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,110] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-15 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,110] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-30 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,110] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-0 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,110] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-35 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,111] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-5 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,111] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,111] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-27 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,111] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-42 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,111] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-12 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,111] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-21 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,111] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-36 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,111] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-6 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,111] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-43 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,111] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-13 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,111] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-28 (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,112] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-37, policy-pdp-pap-0, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager)
09:48:57 kafka | [2025-06-19 09:43:28,112] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 51 partitions (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,176] INFO [LogLoader partition=__consumer_offsets-3, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
09:48:57 kafka | [2025-06-19 09:43:28,187] INFO Created log for partition __consumer_offsets-3 in /var/lib/kafka/data/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
09:48:57 kafka | [2025-06-19 09:43:28,188] INFO [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition)
09:48:57 kafka | [2025-06-19 09:43:28,189] INFO [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition)
09:48:57 kafka | [2025-06-19 09:43:28,190] INFO [Broker id=1] Leader __consumer_offsets-3 with topic id Some(SLPuTvShSYSLKtLkT53E1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,204] INFO [LogLoader partition=__consumer_offsets-18, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
09:48:57 kafka | [2025-06-19 09:43:28,205] INFO Created log for partition __consumer_offsets-18 in /var/lib/kafka/data/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
09:48:57 kafka | [2025-06-19 09:43:28,205] INFO [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition)
09:48:57 kafka | [2025-06-19 09:43:28,205] INFO [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition)
09:48:57 kafka | [2025-06-19 09:43:28,205] INFO [Broker id=1] Leader __consumer_offsets-18 with topic id Some(SLPuTvShSYSLKtLkT53E1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,213] INFO [LogLoader partition=__consumer_offsets-41, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
09:48:57 kafka | [2025-06-19 09:43:28,214] INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
09:48:57 kafka | [2025-06-19 09:43:28,214] INFO [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition)
09:48:57 kafka | [2025-06-19 09:43:28,214] INFO [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition)
09:48:57 kafka | [2025-06-19 09:43:28,214] INFO [Broker id=1] Leader __consumer_offsets-41 with topic id Some(SLPuTvShSYSLKtLkT53E1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,222] INFO [LogLoader partition=__consumer_offsets-10, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
09:48:57 kafka | [2025-06-19 09:43:28,222] INFO Created log for partition __consumer_offsets-10 in /var/lib/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
09:48:57 kafka | [2025-06-19 09:43:28,222] INFO [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition)
09:48:57 kafka | [2025-06-19 09:43:28,222] INFO [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition)
09:48:57 kafka | [2025-06-19 09:43:28,223] INFO [Broker id=1] Leader __consumer_offsets-10 with topic id Some(SLPuTvShSYSLKtLkT53E1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,239] INFO [LogLoader partition=__consumer_offsets-33, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
09:48:57 kafka | [2025-06-19 09:43:28,240] INFO Created log for partition __consumer_offsets-33 in /var/lib/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
09:48:57 kafka | [2025-06-19 09:43:28,240] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition)
09:48:57 kafka | [2025-06-19 09:43:28,240] INFO [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition)
09:48:57 kafka | [2025-06-19 09:43:28,240] INFO [Broker id=1] Leader __consumer_offsets-33 with topic id Some(SLPuTvShSYSLKtLkT53E1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,250] INFO [LogLoader partition=__consumer_offsets-48, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
09:48:57 kafka | [2025-06-19 09:43:28,251] INFO Created log for partition __consumer_offsets-48 in /var/lib/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
09:48:57 kafka | [2025-06-19 09:43:28,251] INFO [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition)
09:48:57 kafka | [2025-06-19 09:43:28,251] INFO [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition)
09:48:57 kafka | [2025-06-19 09:43:28,251] INFO [Broker id=1] Leader __consumer_offsets-48 with topic id Some(SLPuTvShSYSLKtLkT53E1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,259] INFO [LogLoader partition=__consumer_offsets-19, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
09:48:57 kafka | [2025-06-19 09:43:28,259] INFO Created log for partition __consumer_offsets-19 in /var/lib/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
09:48:57 kafka | [2025-06-19 09:43:28,259] INFO [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition)
09:48:57 kafka | [2025-06-19 09:43:28,260] INFO [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition)
09:48:57 kafka | [2025-06-19 09:43:28,260] INFO [Broker id=1] Leader __consumer_offsets-19 with topic id Some(SLPuTvShSYSLKtLkT53E1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,271] INFO [LogLoader partition=__consumer_offsets-34, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
09:48:57 kafka | [2025-06-19 09:43:28,272] INFO Created log for partition __consumer_offsets-34 in /var/lib/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
09:48:57 kafka | [2025-06-19 09:43:28,272] INFO [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition)
09:48:57 kafka | [2025-06-19 09:43:28,272] INFO [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition)
09:48:57 kafka | [2025-06-19 09:43:28,272] INFO [Broker id=1] Leader __consumer_offsets-34 with topic id Some(SLPuTvShSYSLKtLkT53E1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,284] INFO [LogLoader partition=__consumer_offsets-4, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
09:48:57 kafka | [2025-06-19 09:43:28,285] INFO Created log for partition __consumer_offsets-4 in /var/lib/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
09:48:57 kafka | [2025-06-19 09:43:28,285] INFO [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition)
09:48:57 kafka | [2025-06-19 09:43:28,285] INFO [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition)
09:48:57 kafka | [2025-06-19 09:43:28,286] INFO [Broker id=1] Leader __consumer_offsets-4 with topic id Some(SLPuTvShSYSLKtLkT53E1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,295] INFO [LogLoader partition=__consumer_offsets-11, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
09:48:57 kafka | [2025-06-19 09:43:28,296] INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
09:48:57 kafka | [2025-06-19 09:43:28,296] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition)
09:48:57 kafka | [2025-06-19 09:43:28,296] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition)
09:48:57 kafka | [2025-06-19 09:43:28,296] INFO [Broker id=1] Leader __consumer_offsets-11 with topic id Some(SLPuTvShSYSLKtLkT53E1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,306] INFO [LogLoader partition=__consumer_offsets-26, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
09:48:57 kafka | [2025-06-19 09:43:28,307] INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
09:48:57 kafka | [2025-06-19 09:43:28,307] INFO [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition)
09:48:57 kafka | [2025-06-19 09:43:28,307] INFO [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition)
09:48:57 kafka | [2025-06-19 09:43:28,307] INFO [Broker id=1] Leader __consumer_offsets-26 with topic id Some(SLPuTvShSYSLKtLkT53E1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,317] INFO [LogLoader partition=__consumer_offsets-49, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
09:48:57 kafka | [2025-06-19 09:43:28,318] INFO Created log for partition __consumer_offsets-49 in /var/lib/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
09:48:57 kafka | [2025-06-19 09:43:28,318] INFO [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition)
09:48:57 kafka | [2025-06-19 09:43:28,318] INFO [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition)
09:48:57 kafka | [2025-06-19 09:43:28,318] INFO [Broker id=1] Leader __consumer_offsets-49 with topic id Some(SLPuTvShSYSLKtLkT53E1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,328] INFO [LogLoader partition=__consumer_offsets-39, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
09:48:57 kafka | [2025-06-19 09:43:28,329] INFO Created log for partition __consumer_offsets-39 in /var/lib/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
09:48:57 kafka | [2025-06-19 09:43:28,329] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition)
09:48:57 kafka | [2025-06-19 09:43:28,329] INFO [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition)
09:48:57 kafka | [2025-06-19 09:43:28,329] INFO [Broker id=1] Leader __consumer_offsets-39 with topic id Some(SLPuTvShSYSLKtLkT53E1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,339] INFO [LogLoader partition=__consumer_offsets-9, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
09:48:57 kafka | [2025-06-19 09:43:28,340] INFO Created log for partition __consumer_offsets-9 in /var/lib/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
09:48:57 kafka | [2025-06-19 09:43:28,340] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition)
09:48:57 kafka | [2025-06-19 09:43:28,340] INFO [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition)
09:48:57 kafka | [2025-06-19 09:43:28,340] INFO [Broker id=1] Leader __consumer_offsets-9 with topic id Some(SLPuTvShSYSLKtLkT53E1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,367] INFO [LogLoader partition=__consumer_offsets-24, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
09:48:57 kafka | [2025-06-19 09:43:28,369] INFO Created log for partition __consumer_offsets-24 in /var/lib/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
09:48:57 kafka | [2025-06-19 09:43:28,369] INFO [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition)
09:48:57 kafka | [2025-06-19 09:43:28,369] INFO [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition)
09:48:57 kafka | [2025-06-19 09:43:28,369] INFO [Broker id=1] Leader __consumer_offsets-24 with topic id Some(SLPuTvShSYSLKtLkT53E1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,377] INFO [LogLoader partition=__consumer_offsets-31, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
09:48:57 kafka | [2025-06-19 09:43:28,378] INFO Created log for partition __consumer_offsets-31 in /var/lib/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
09:48:57 kafka | [2025-06-19 09:43:28,378] INFO [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition)
09:48:57 kafka | [2025-06-19 09:43:28,378] INFO [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition)
09:48:57 kafka | [2025-06-19 09:43:28,378] INFO [Broker id=1] Leader __consumer_offsets-31 with topic id Some(SLPuTvShSYSLKtLkT53E1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,390] INFO [LogLoader partition=__consumer_offsets-46, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
09:48:57 kafka | [2025-06-19 09:43:28,392] INFO Created log for partition __consumer_offsets-46 in /var/lib/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
09:48:57 kafka | [2025-06-19 09:43:28,392] INFO [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition)
09:48:57 kafka | [2025-06-19 09:43:28,392] INFO [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition)
09:48:57 kafka | [2025-06-19 09:43:28,392] INFO [Broker id=1] Leader __consumer_offsets-46 with topic id Some(SLPuTvShSYSLKtLkT53E1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,402] INFO [LogLoader partition=__consumer_offsets-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
09:48:57 kafka | [2025-06-19 09:43:28,403] INFO Created log for partition __consumer_offsets-1 in /var/lib/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
09:48:57 kafka | [2025-06-19 09:43:28,403] INFO [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition)
09:48:57 kafka | [2025-06-19 09:43:28,403] INFO [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition)
09:48:57 kafka | [2025-06-19 09:43:28,403] INFO [Broker id=1] Leader __consumer_offsets-1 with topic id Some(SLPuTvShSYSLKtLkT53E1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,414] INFO [LogLoader partition=__consumer_offsets-16, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
09:48:57 kafka | [2025-06-19 09:43:28,415] INFO Created log for partition __consumer_offsets-16 in /var/lib/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
09:48:57 kafka | [2025-06-19 09:43:28,415] INFO [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition)
09:48:57 kafka | [2025-06-19 09:43:28,415] INFO [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition)
09:48:57 kafka | [2025-06-19 09:43:28,416] INFO [Broker id=1] Leader __consumer_offsets-16 with topic id Some(SLPuTvShSYSLKtLkT53E1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,424] INFO [LogLoader partition=__consumer_offsets-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
09:48:57 kafka | [2025-06-19 09:43:28,425] INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
09:48:57 kafka | [2025-06-19 09:43:28,425] INFO [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition)
09:48:57 kafka | [2025-06-19 09:43:28,425] INFO [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition)
09:48:57 kafka | [2025-06-19 09:43:28,425] INFO [Broker id=1] Leader __consumer_offsets-2 with topic id Some(SLPuTvShSYSLKtLkT53E1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,435] INFO [LogLoader partition=__consumer_offsets-25, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
09:48:57 kafka | [2025-06-19 09:43:28,436] INFO Created log for partition __consumer_offsets-25 in /var/lib/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
09:48:57 kafka | [2025-06-19 09:43:28,436] INFO [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition)
09:48:57 kafka | [2025-06-19 09:43:28,436] INFO [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition)
09:48:57 kafka | [2025-06-19 09:43:28,436] INFO [Broker id=1] Leader __consumer_offsets-25 with topic id Some(SLPuTvShSYSLKtLkT53E1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
09:48:57 kafka | [2025-06-19 09:43:28,447] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
09:48:57 kafka | [2025-06-19 09:43:28,448] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
09:48:57 kafka | [2025-06-19 09:43:28,448] INFO [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition)
09:48:57 kafka | [2025-06-19 09:43:28,448] INFO [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition)
09:48:57 kafka | [2025-06-19 09:43:28,449] INFO [Broker id=1] Leader __consumer_offsets-40 with topic id Some(SLPuTvShSYSLKtLkT53E1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1.
(state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,456] INFO [LogLoader partition=__consumer_offsets-47, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:48:57 kafka | [2025-06-19 09:43:28,457] INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:48:57 kafka | [2025-06-19 09:43:28,457] INFO [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition) 09:48:57 kafka | [2025-06-19 09:43:28,457] INFO [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition) 09:48:57 kafka | [2025-06-19 09:43:28,457] INFO [Broker id=1] Leader __consumer_offsets-47 with topic id Some(SLPuTvShSYSLKtLkT53E1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,466] INFO [LogLoader partition=__consumer_offsets-17, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:48:57 kafka | [2025-06-19 09:43:28,467] INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:48:57 kafka | [2025-06-19 09:43:28,467] INFO [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition) 09:48:57 kafka | [2025-06-19 09:43:28,467] INFO [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition) 09:48:57 kafka | [2025-06-19 09:43:28,467] INFO [Broker id=1] Leader __consumer_offsets-17 with topic id Some(SLPuTvShSYSLKtLkT53E1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,477] INFO [LogLoader partition=__consumer_offsets-32, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:48:57 kafka | [2025-06-19 09:43:28,478] INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:48:57 kafka | [2025-06-19 09:43:28,478] INFO [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition) 09:48:57 kafka | [2025-06-19 09:43:28,478] INFO [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition) 09:48:57 kafka | [2025-06-19 09:43:28,478] INFO [Broker id=1] Leader __consumer_offsets-32 with topic id Some(SLPuTvShSYSLKtLkT53E1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,487] INFO [LogLoader partition=__consumer_offsets-37, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:48:57 kafka | [2025-06-19 09:43:28,488] INFO Created log for partition __consumer_offsets-37 in /var/lib/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:48:57 kafka | [2025-06-19 09:43:28,488] INFO [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition) 09:48:57 kafka | [2025-06-19 09:43:28,488] INFO [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition) 09:48:57 kafka | [2025-06-19 09:43:28,488] INFO [Broker id=1] Leader __consumer_offsets-37 with topic id Some(SLPuTvShSYSLKtLkT53E1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,496] INFO [LogLoader partition=__consumer_offsets-7, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:48:57 kafka | [2025-06-19 09:43:28,497] INFO Created log for partition __consumer_offsets-7 in /var/lib/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:48:57 kafka | [2025-06-19 09:43:28,497] INFO [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition) 09:48:57 kafka | [2025-06-19 09:43:28,498] INFO [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition) 09:48:57 kafka | [2025-06-19 09:43:28,498] INFO [Broker id=1] Leader __consumer_offsets-7 with topic id Some(SLPuTvShSYSLKtLkT53E1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,508] INFO [LogLoader partition=__consumer_offsets-22, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:48:57 kafka | [2025-06-19 09:43:28,509] INFO Created log for partition __consumer_offsets-22 in /var/lib/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:48:57 kafka | [2025-06-19 09:43:28,509] INFO [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition) 09:48:57 kafka | [2025-06-19 09:43:28,509] INFO [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition) 09:48:57 kafka | [2025-06-19 09:43:28,509] INFO [Broker id=1] Leader __consumer_offsets-22 with topic id Some(SLPuTvShSYSLKtLkT53E1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,521] INFO [LogLoader partition=__consumer_offsets-29, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:48:57 kafka | [2025-06-19 09:43:28,521] INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:48:57 kafka | [2025-06-19 09:43:28,521] INFO [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition) 09:48:57 kafka | [2025-06-19 09:43:28,522] INFO [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition) 09:48:57 kafka | [2025-06-19 09:43:28,522] INFO [Broker id=1] Leader __consumer_offsets-29 with topic id Some(SLPuTvShSYSLKtLkT53E1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,532] INFO [LogLoader partition=__consumer_offsets-44, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:48:57 kafka | [2025-06-19 09:43:28,532] INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:48:57 kafka | [2025-06-19 09:43:28,532] INFO [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition) 09:48:57 kafka | [2025-06-19 09:43:28,532] INFO [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition) 09:48:57 kafka | [2025-06-19 09:43:28,532] INFO [Broker id=1] Leader __consumer_offsets-44 with topic id Some(SLPuTvShSYSLKtLkT53E1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,541] INFO [LogLoader partition=__consumer_offsets-14, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:48:57 kafka | [2025-06-19 09:43:28,542] INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:48:57 kafka | [2025-06-19 09:43:28,542] INFO [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition) 09:48:57 kafka | [2025-06-19 09:43:28,542] INFO [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition) 09:48:57 kafka | [2025-06-19 09:43:28,542] INFO [Broker id=1] Leader __consumer_offsets-14 with topic id Some(SLPuTvShSYSLKtLkT53E1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,552] INFO [LogLoader partition=__consumer_offsets-23, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:48:57 kafka | [2025-06-19 09:43:28,552] INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:48:57 kafka | [2025-06-19 09:43:28,552] INFO [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition) 09:48:57 kafka | [2025-06-19 09:43:28,552] INFO [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition) 09:48:57 kafka | [2025-06-19 09:43:28,553] INFO [Broker id=1] Leader __consumer_offsets-23 with topic id Some(SLPuTvShSYSLKtLkT53E1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,560] INFO [LogLoader partition=__consumer_offsets-38, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:48:57 kafka | [2025-06-19 09:43:28,560] INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:48:57 kafka | [2025-06-19 09:43:28,560] INFO [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition) 09:48:57 kafka | [2025-06-19 09:43:28,561] INFO [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition) 09:48:57 kafka | [2025-06-19 09:43:28,561] INFO [Broker id=1] Leader __consumer_offsets-38 with topic id Some(SLPuTvShSYSLKtLkT53E1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,570] INFO [LogLoader partition=__consumer_offsets-8, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:48:57 kafka | [2025-06-19 09:43:28,570] INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:48:57 kafka | [2025-06-19 09:43:28,570] INFO [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition) 09:48:57 kafka | [2025-06-19 09:43:28,570] INFO [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition) 09:48:57 kafka | [2025-06-19 09:43:28,570] INFO [Broker id=1] Leader __consumer_offsets-8 with topic id Some(SLPuTvShSYSLKtLkT53E1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,581] INFO [LogLoader partition=policy-pdp-pap-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:48:57 kafka | [2025-06-19 09:43:28,581] INFO Created log for partition policy-pdp-pap-0 in /var/lib/kafka/data/policy-pdp-pap-0 with properties {} (kafka.log.LogManager) 09:48:57 kafka | [2025-06-19 09:43:28,581] INFO [Partition policy-pdp-pap-0 broker=1] No checkpointed highwatermark is found for partition policy-pdp-pap-0 (kafka.cluster.Partition) 09:48:57 kafka | [2025-06-19 09:43:28,582] INFO [Partition policy-pdp-pap-0 broker=1] Log loaded for partition policy-pdp-pap-0 with initial high watermark 0 (kafka.cluster.Partition) 09:48:57 kafka | [2025-06-19 09:43:28,582] INFO [Broker id=1] Leader policy-pdp-pap-0 with topic id Some(_EGxPVSHToun88wdUralMA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,590] INFO [LogLoader partition=__consumer_offsets-45, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:48:57 kafka | [2025-06-19 09:43:28,591] INFO Created log for partition __consumer_offsets-45 in /var/lib/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:48:57 kafka | [2025-06-19 09:43:28,591] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition) 09:48:57 kafka | [2025-06-19 09:43:28,591] INFO [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition) 09:48:57 kafka | [2025-06-19 09:43:28,591] INFO [Broker id=1] Leader __consumer_offsets-45 with topic id Some(SLPuTvShSYSLKtLkT53E1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,600] INFO [LogLoader partition=__consumer_offsets-15, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:48:57 kafka | [2025-06-19 09:43:28,601] INFO Created log for partition __consumer_offsets-15 in /var/lib/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:48:57 kafka | [2025-06-19 09:43:28,601] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition) 09:48:57 kafka | [2025-06-19 09:43:28,601] INFO [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition) 09:48:57 kafka | [2025-06-19 09:43:28,601] INFO [Broker id=1] Leader __consumer_offsets-15 with topic id Some(SLPuTvShSYSLKtLkT53E1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,612] INFO [LogLoader partition=__consumer_offsets-30, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:48:57 kafka | [2025-06-19 09:43:28,613] INFO Created log for partition __consumer_offsets-30 in /var/lib/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:48:57 kafka | [2025-06-19 09:43:28,613] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition) 09:48:57 kafka | [2025-06-19 09:43:28,613] INFO [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition) 09:48:57 kafka | [2025-06-19 09:43:28,613] INFO [Broker id=1] Leader __consumer_offsets-30 with topic id Some(SLPuTvShSYSLKtLkT53E1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,623] INFO [LogLoader partition=__consumer_offsets-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:48:57 kafka | [2025-06-19 09:43:28,623] INFO Created log for partition __consumer_offsets-0 in /var/lib/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:48:57 kafka | [2025-06-19 09:43:28,624] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition) 09:48:57 kafka | [2025-06-19 09:43:28,624] INFO [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition) 09:48:57 kafka | [2025-06-19 09:43:28,624] INFO [Broker id=1] Leader __consumer_offsets-0 with topic id Some(SLPuTvShSYSLKtLkT53E1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,632] INFO [LogLoader partition=__consumer_offsets-35, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:48:57 kafka | [2025-06-19 09:43:28,633] INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:48:57 kafka | [2025-06-19 09:43:28,633] INFO [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition) 09:48:57 kafka | [2025-06-19 09:43:28,633] INFO [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition) 09:48:57 kafka | [2025-06-19 09:43:28,633] INFO [Broker id=1] Leader __consumer_offsets-35 with topic id Some(SLPuTvShSYSLKtLkT53E1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,642] INFO [LogLoader partition=__consumer_offsets-5, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:48:57 kafka | [2025-06-19 09:43:28,642] INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:48:57 kafka | [2025-06-19 09:43:28,642] INFO [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition) 09:48:57 kafka | [2025-06-19 09:43:28,642] INFO [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition) 09:48:57 kafka | [2025-06-19 09:43:28,642] INFO [Broker id=1] Leader __consumer_offsets-5 with topic id Some(SLPuTvShSYSLKtLkT53E1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,651] INFO [LogLoader partition=__consumer_offsets-20, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:48:57 kafka | [2025-06-19 09:43:28,652] INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:48:57 kafka | [2025-06-19 09:43:28,652] INFO [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition) 09:48:57 kafka | [2025-06-19 09:43:28,652] INFO [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition) 09:48:57 kafka | [2025-06-19 09:43:28,652] INFO [Broker id=1] Leader __consumer_offsets-20 with topic id Some(SLPuTvShSYSLKtLkT53E1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,659] INFO [LogLoader partition=__consumer_offsets-27, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:48:57 kafka | [2025-06-19 09:43:28,660] INFO Created log for partition __consumer_offsets-27 in /var/lib/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:48:57 kafka | [2025-06-19 09:43:28,660] INFO [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition) 09:48:57 kafka | [2025-06-19 09:43:28,660] INFO [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition) 09:48:57 kafka | [2025-06-19 09:43:28,660] INFO [Broker id=1] Leader __consumer_offsets-27 with topic id Some(SLPuTvShSYSLKtLkT53E1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,673] INFO [LogLoader partition=__consumer_offsets-42, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:48:57 kafka | [2025-06-19 09:43:28,674] INFO Created log for partition __consumer_offsets-42 in /var/lib/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:48:57 kafka | [2025-06-19 09:43:28,674] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition) 09:48:57 kafka | [2025-06-19 09:43:28,675] INFO [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition) 09:48:57 kafka | [2025-06-19 09:43:28,675] INFO [Broker id=1] Leader __consumer_offsets-42 with topic id Some(SLPuTvShSYSLKtLkT53E1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,684] INFO [LogLoader partition=__consumer_offsets-12, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:48:57 kafka | [2025-06-19 09:43:28,685] INFO Created log for partition __consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:48:57 kafka | [2025-06-19 09:43:28,685] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition) 09:48:57 kafka | [2025-06-19 09:43:28,685] INFO [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition) 09:48:57 kafka | [2025-06-19 09:43:28,685] INFO [Broker id=1] Leader __consumer_offsets-12 with topic id Some(SLPuTvShSYSLKtLkT53E1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,693] INFO [LogLoader partition=__consumer_offsets-21, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:48:57 kafka | [2025-06-19 09:43:28,694] INFO Created log for partition __consumer_offsets-21 in /var/lib/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:48:57 kafka | [2025-06-19 09:43:28,694] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition) 09:48:57 kafka | [2025-06-19 09:43:28,694] INFO [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition) 09:48:57 kafka | [2025-06-19 09:43:28,694] INFO [Broker id=1] Leader __consumer_offsets-21 with topic id Some(SLPuTvShSYSLKtLkT53E1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,702] INFO [LogLoader partition=__consumer_offsets-36, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:48:57 kafka | [2025-06-19 09:43:28,702] INFO Created log for partition __consumer_offsets-36 in /var/lib/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:48:57 kafka | [2025-06-19 09:43:28,702] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition) 09:48:57 kafka | [2025-06-19 09:43:28,702] INFO [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition) 09:48:57 kafka | [2025-06-19 09:43:28,703] INFO [Broker id=1] Leader __consumer_offsets-36 with topic id Some(SLPuTvShSYSLKtLkT53E1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,712] INFO [LogLoader partition=__consumer_offsets-6, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:48:57 kafka | [2025-06-19 09:43:28,713] INFO Created log for partition __consumer_offsets-6 in /var/lib/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:48:57 kafka | [2025-06-19 09:43:28,713] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition) 09:48:57 kafka | [2025-06-19 09:43:28,713] INFO [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition) 09:48:57 kafka | [2025-06-19 09:43:28,713] INFO [Broker id=1] Leader __consumer_offsets-6 with topic id Some(SLPuTvShSYSLKtLkT53E1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,722] INFO [LogLoader partition=__consumer_offsets-43, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:48:57 kafka | [2025-06-19 09:43:28,723] INFO Created log for partition __consumer_offsets-43 in /var/lib/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:48:57 kafka | [2025-06-19 09:43:28,723] INFO [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition) 09:48:57 kafka | [2025-06-19 09:43:28,723] INFO [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition) 09:48:57 kafka | [2025-06-19 09:43:28,723] INFO [Broker id=1] Leader __consumer_offsets-43 with topic id Some(SLPuTvShSYSLKtLkT53E1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,735] INFO [LogLoader partition=__consumer_offsets-13, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:48:57 kafka | [2025-06-19 09:43:28,736] INFO Created log for partition __consumer_offsets-13 in /var/lib/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:48:57 kafka | [2025-06-19 09:43:28,736] INFO [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition) 09:48:57 kafka | [2025-06-19 09:43:28,736] INFO [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition) 09:48:57 kafka | [2025-06-19 09:43:28,736] INFO [Broker id=1] Leader __consumer_offsets-13 with topic id Some(SLPuTvShSYSLKtLkT53E1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,748] INFO [LogLoader partition=__consumer_offsets-28, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:48:57 kafka | [2025-06-19 09:43:28,749] INFO Created log for partition __consumer_offsets-28 in /var/lib/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:48:57 kafka | [2025-06-19 09:43:28,749] INFO [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition) 09:48:57 kafka | [2025-06-19 09:43:28,749] INFO [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition) 09:48:57 kafka | [2025-06-19 09:43:28,749] INFO [Broker id=1] Leader __consumer_offsets-28 with topic id Some(SLPuTvShSYSLKtLkT53E1g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
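The broker lines above show each __consumer_offsets partition being created with cleanup.policy=compact, compression.type=producer and segment.bytes=104857600, a single-broker ISR of [1], and leader epoch 0. As a minimal sketch (not part of this CSIT job; it assumes a reachable broker at localhost:9092 and the org.apache.kafka:kafka-clients dependency, and the topic name is invented), the same per-partition properties could be set on a new topic via the Java AdminClient:

// Illustrative only -- a sketch, not something the CSIT job runs.
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

import java.util.Collections;
import java.util.Map;
import java.util.Properties;

public class CreateCompactedTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Assumed address; the CSIT environment's real listener may differ.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // 50 partitions, replication factor 1 -- mirroring the
            // single-broker ISR [1] reported in the log above.
            NewTopic topic = new NewTopic("example-offsets-like-topic", 50, (short) 1)
                    .configs(Map.of(
                            "cleanup.policy", "compact",      // same as the log's created partitions
                            "segment.bytes", "104857600"));   // 100 MiB segments, as logged
            admin.createTopics(Collections.singleton(topic)).all().get();
        }
    }
}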
(state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,757] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,757] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,758] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,758] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,758] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,758] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-48 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,758] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,758] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-34 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,759] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-4 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,759] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,759] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-26 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,759] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-49 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,759] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-39 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,759] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-9 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,760] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-24 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,760] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 
epoch 1 for the become-leader transition for partition __consumer_offsets-31 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,760] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-46 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,760] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-1 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,760] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-16 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,760] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,760] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-25 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,761] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-40 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,761] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-47 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,761] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,761] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-32 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,761] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-37 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,761] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-7 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,762] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-22 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,762] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,762] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-44 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,762] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-14 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,762] TRACE 
[Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,762] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,763] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-8 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,763] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,763] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-45 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,763] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-15 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,763] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-30 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,763] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-0 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,763] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-35 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,764] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-5 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,764] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-20 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,764] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-27 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,764] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-42 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,764] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-12 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,764] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-21 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,765] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-36 
(state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,765] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-6 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,765] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-43 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,765] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-13 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,765] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-28 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,769] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:48:57 kafka | [2025-06-19 09:43:28,770] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,771] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:48:57 kafka | [2025-06-19 09:43:28,772] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,772] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:48:57 kafka | [2025-06-19 09:43:28,772] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,772] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:48:57 kafka | [2025-06-19 09:43:28,772] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,772] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:48:57 kafka | [2025-06-19 09:43:28,772] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,772] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:48:57 kafka | [2025-06-19 09:43:28,772] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,772] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:48:57 kafka | [2025-06-19 09:43:28,772] INFO [GroupMetadataManager brokerId=1] Scheduling loading of 
offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,773] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:48:57 kafka | [2025-06-19 09:43:28,773] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,773] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:48:57 kafka | [2025-06-19 09:43:28,773] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,773] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:48:57 kafka | [2025-06-19 09:43:28,773] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,773] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:48:57 kafka | [2025-06-19 09:43:28,773] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,773] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:48:57 kafka | [2025-06-19 09:43:28,773] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,773] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:48:57 kafka | [2025-06-19 09:43:28,773] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,774] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:48:57 kafka | [2025-06-19 09:43:28,774] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,774] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:48:57 kafka | [2025-06-19 09:43:28,774] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,774] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:48:57 kafka | [2025-06-19 09:43:28,774] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from 
__consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,774] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:48:57 kafka | [2025-06-19 09:43:28,774] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,774] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:48:57 kafka | [2025-06-19 09:43:28,774] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,774] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:48:57 kafka | [2025-06-19 09:43:28,775] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,775] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:48:57 kafka | [2025-06-19 09:43:28,775] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,775] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:48:57 kafka | [2025-06-19 09:43:28,775] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,775] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:48:57 kafka | [2025-06-19 09:43:28,775] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,775] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:48:57 kafka | [2025-06-19 09:43:28,775] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,775] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:48:57 kafka | [2025-06-19 09:43:28,775] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,776] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:48:57 kafka | [2025-06-19 09:43:28,776] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 
(kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,776] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:48:57 kafka | [2025-06-19 09:43:28,776] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,776] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:48:57 kafka | [2025-06-19 09:43:28,776] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,776] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 22 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:48:57 kafka | [2025-06-19 09:43:28,776] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,776] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:48:57 kafka | [2025-06-19 09:43:28,776] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,776] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:48:57 kafka | [2025-06-19 09:43:28,776] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,777] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:48:57 kafka | [2025-06-19 09:43:28,777] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,777] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:48:57 kafka | [2025-06-19 09:43:28,777] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,777] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:48:57 kafka | [2025-06-19 09:43:28,777] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,777] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:48:57 kafka | [2025-06-19 09:43:28,777] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 
(kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,777] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:48:57 kafka | [2025-06-19 09:43:28,777] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,777] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:48:57 kafka | [2025-06-19 09:43:28,778] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,778] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:48:57 kafka | [2025-06-19 09:43:28,778] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,778] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:48:57 kafka | [2025-06-19 09:43:28,778] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,778] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:48:57 kafka | [2025-06-19 09:43:28,778] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,778] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:48:57 kafka | [2025-06-19 09:43:28,778] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,778] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:48:57 kafka | [2025-06-19 09:43:28,778] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,779] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:48:57 kafka | [2025-06-19 09:43:28,779] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,779] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:48:57 kafka | [2025-06-19 09:43:28,779] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 
(kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,779] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:48:57 kafka | [2025-06-19 09:43:28,779] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,779] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:48:57 kafka | [2025-06-19 09:43:28,779] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,779] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:48:57 kafka | [2025-06-19 09:43:28,779] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,779] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:48:57 kafka | [2025-06-19 09:43:28,780] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,779] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 9 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,780] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:48:57 kafka | [2025-06-19 09:43:28,780] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,780] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:48:57 kafka | [2025-06-19 09:43:28,780] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,780] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,780] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. 
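The wall of GroupCoordinator elections above is broker 1 becoming coordinator for every __consumer_offsets partition it leads. On the classic group protocol, the coordinator for a given group is the leader of partition Utils.abs(groupId.hashCode) % offsets.topic.num.partitions, which is why each group later lands on one fixed partition. A minimal sketch of that mapping, assuming the default 50 partitions visible in this log (class and method names here are mine):

// Sketch: how a broker maps a consumer group to its __consumer_offsets
// partition; the formula mirrors Kafka's GroupMetadataManager.partitionFor.
public class CoordinatorPartition {
    // Guard Integer.MIN_VALUE, which Math.abs alone would leave negative.
    static int abs(int n) { return (n == Integer.MIN_VALUE) ? 0 : Math.abs(n); }

    static int partitionFor(String groupId, int offsetsTopicPartitionCount) {
        return abs(groupId.hashCode()) % offsetsTopicPartitionCount;
    }

    public static void main(String[] args) {
        // Groups that appear in this CSIT run; 50 matches __consumer_offsets-0..49.
        for (String g : new String[] {"policy-pap", "opa-pdp", "testgrp"}) {
            System.out.printf("%s -> __consumer_offsets-%d%n", g, partitionFor(g, 50));
        }
    }
}

This is consistent with the rebalance entries further down, where policy-pap sits on __consumer_offsets-24, opa-pdp on __consumer_offsets-25 and testgrp on __consumer_offsets-3.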
(kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,780] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:48:57 kafka | [2025-06-19 09:43:28,781] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,781] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,781] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,781] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,781] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,782] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 9 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,782] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,782] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,782] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,782] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,783] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,783] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,783] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,783] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,784] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,784] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,784] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,784] INFO [Broker id=1] Finished LeaderAndIsr request in 717ms correlationId 1 from controller 1 for 51 partitions (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,784] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,785] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,785] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,785] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,785] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,786] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,786] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,786] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,786] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,786] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,786] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,787] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,787] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,787] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,787] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,787] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,788] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,788] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,788] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,788] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,788] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,789] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 11 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,789] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,789] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,790] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,790] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,790] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,790] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,791] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 11 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,791] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. 
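Each "Finished loading offsets and group metadata" entry above reports the total per-partition load time and how much of it was spent queued in the scheduler. A throwaway parser for aggregating those numbers from a captured console log like this one (OffsetLoadTimes is a hypothetical helper, not part of the CSIT suite):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class OffsetLoadTimes {
    // Matches the broker's "Finished loading ..." wording seen in this log.
    private static final Pattern P = Pattern.compile(
        "Finished loading offsets and group metadata from __consumer_offsets-(\\d+) " +
        "in (\\d+) milliseconds for epoch (\\d+), of which (\\d+) milliseconds " +
        "was spent in the scheduler");

    public static void main(String[] args) throws Exception {
        long total = 0, scheduler = 0, partitions = 0;
        try (BufferedReader in = new BufferedReader(new InputStreamReader(System.in))) {
            String line;
            while ((line = in.readLine()) != null) {
                Matcher m = P.matcher(line);
                while (m.find()) {          // several entries can share one flowed line
                    partitions++;
                    total += Long.parseLong(m.group(2));
                    scheduler += Long.parseLong(m.group(4));
                }
            }
        }
        System.out.printf("%d partitions, %d ms total, %d ms queued in scheduler%n",
                partitions, total, scheduler);
    }
}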
(kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,791] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:48:57 kafka | [2025-06-19 09:43:28,791] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=SLPuTvShSYSLKtLkT53E1g, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), 
LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)]), LeaderAndIsrTopicError(topicId=_EGxPVSHToun88wdUralMA, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,798] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-pdp-pap-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,798] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-13 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,798] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-46 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,798] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-9 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,798] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-42 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,798] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-21 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,798] TRACE [Broker id=1] Cached leader info 
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-17 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,798] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-30 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,798] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-26 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,799] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-5 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,799] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-38 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,799] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-1 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,799] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-34 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,799] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-16 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,799] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-45 in response to UpdateMetadata request sent by controller 1 epoch 1 
with correlation id 2 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,799] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-12 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,799] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-41 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,799] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-24 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,799] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-20 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,799] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-49 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,799] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,799] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-29 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,799] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-25 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,799] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], 
offlineReplicas=[]) for partition __consumer_offsets-8 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,799] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-37 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,799] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-4 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,799] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-33 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,799] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-15 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,799] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-48 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,799] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-11 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,799] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-44 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,799] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-23 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,799] TRACE [Broker id=1] Cached leader info 
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-19 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,799] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-32 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,799] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-28 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,799] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-7 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,799] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-40 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,799] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-3 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,799] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-36 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,799] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-47 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,799] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-14 in response to UpdateMetadata request sent by controller 1 epoch 1 
with correlation id 2 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,799] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-43 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,799] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-10 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,799] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-22 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,799] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-18 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,799] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-31 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,799] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-27 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,799] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-39 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,799] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-6 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,799] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, 
replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-35 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,799] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-2 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,800] INFO [Broker id=1] Add 51 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:28,801] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 09:48:57 kafka | [2025-06-19 09:43:29,585] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group policy-pap in Empty state. Created a new member id consumer-policy-pap-4-453b806e-3f06-4534-9706-bca0aee8c7cf and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) 09:48:57 kafka | [2025-06-19 09:43:29,597] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group be3aa0f9-34f3-4045-970c-8ec59634b69d in Empty state. Created a new member id consumer-be3aa0f9-34f3-4045-970c-8ec59634b69d-3-35c318e7-8a52-4dcc-85bf-70368edd1888 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) 09:48:57 kafka | [2025-06-19 09:43:29,605] INFO [GroupCoordinator 1]: Preparing to rebalance group be3aa0f9-34f3-4045-970c-8ec59634b69d in state PreparingRebalance with old generation 0 (__consumer_offsets-9) (reason: Adding new member consumer-be3aa0f9-34f3-4045-970c-8ec59634b69d-3-35c318e7-8a52-4dcc-85bf-70368edd1888 with group instance id None; client reason: need to re-join with the given member-id: consumer-be3aa0f9-34f3-4045-970c-8ec59634b69d-3-35c318e7-8a52-4dcc-85bf-70368edd1888) (kafka.coordinator.group.GroupCoordinator) 09:48:57 kafka | [2025-06-19 09:43:29,605] INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-policy-pap-4-453b806e-3f06-4534-9706-bca0aee8c7cf with group instance id None; client reason: need to re-join with the given member-id: consumer-policy-pap-4-453b806e-3f06-4534-9706-bca0aee8c7cf) (kafka.coordinator.group.GroupCoordinator) 09:48:57 kafka | [2025-06-19 09:43:32,617] INFO [GroupCoordinator 1]: Stabilized group be3aa0f9-34f3-4045-970c-8ec59634b69d generation 1 (__consumer_offsets-9) with 1 members (kafka.coordinator.group.GroupCoordinator) 09:48:57 kafka | [2025-06-19 09:43:32,622] INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator) 09:48:57 kafka | [2025-06-19 09:43:32,639] INFO [GroupCoordinator 1]: Assignment received from leader consumer-policy-pap-4-453b806e-3f06-4534-9706-bca0aee8c7cf for group policy-pap for generation 1. The group has 1 members, 0 of which are static. 
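The long run of "Cached leader info ... UpdateMetadata" TRACE entries above is the controller pushing leader/ISR state for all 51 partitions into broker 1's metadata cache; clients see the same data when they describe the topics. A sketch against the kafka:9092 bootstrap used on this compose network (ShowLeaders is an illustrative name):

import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;

public class ShowLeaders {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        try (Admin admin = Admin.create(props)) {
            admin.describeTopics(List.of("policy-pdp-pap", "__consumer_offsets"))
                 .allTopicNames().get()
                 .forEach((name, desc) -> desc.partitions().forEach(p ->
                         // Single-broker CSIT cluster: expect leader=1, isr=[1].
                         System.out.printf("%s-%d leader=%d isr=%s%n",
                                 name, p.partition(), p.leader().id(), p.isr())));
        }
    }
}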
(kafka.coordinator.group.GroupCoordinator) 09:48:57 kafka | [2025-06-19 09:43:32,640] INFO [GroupCoordinator 1]: Assignment received from leader consumer-be3aa0f9-34f3-4045-970c-8ec59634b69d-3-35c318e7-8a52-4dcc-85bf-70368edd1888 for group be3aa0f9-34f3-4045-970c-8ec59634b69d for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) 09:48:57 kafka | [2025-06-19 09:44:13,171] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group opa-pdp in Empty state. Created a new member id rdkafka-84e74041-f0ed-47ee-af30-a1ef36f9280a and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) 09:48:57 kafka | [2025-06-19 09:44:13,173] INFO [GroupCoordinator 1]: Preparing to rebalance group opa-pdp in state PreparingRebalance with old generation 0 (__consumer_offsets-25) (reason: Adding new member rdkafka-84e74041-f0ed-47ee-af30-a1ef36f9280a with group instance id None; client reason: not provided) (kafka.coordinator.group.GroupCoordinator) 09:48:57 kafka | [2025-06-19 09:44:16,173] INFO [GroupCoordinator 1]: Stabilized group opa-pdp generation 1 (__consumer_offsets-25) with 1 members (kafka.coordinator.group.GroupCoordinator) 09:48:57 kafka | [2025-06-19 09:44:16,177] INFO [GroupCoordinator 1]: Assignment received from leader rdkafka-84e74041-f0ed-47ee-af30-a1ef36f9280a for group opa-pdp for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) 09:48:57 kafka | [2025-06-19 09:45:23,884] INFO Creating topic policy-notification with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) 09:48:57 kafka | [2025-06-19 09:45:23,899] INFO [Controller id=1] New topics: [Set(policy-notification)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-notification,Some(hivvtuTxTFiN3icIS17U8Q),Map(policy-notification-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) 09:48:57 kafka | [2025-06-19 09:45:23,899] INFO [Controller id=1] New partition creation callback for policy-notification-0 (kafka.controller.KafkaController) 09:48:57 kafka | [2025-06-19 09:45:23,899] INFO [Controller id=1 epoch=1] Changed partition policy-notification-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:45:23,899] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 09:48:57 kafka | [2025-06-19 09:45:23,899] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-notification-0 from NonExistentReplica to NewReplica (state.change.logger) 09:48:57 kafka | [2025-06-19 09:45:23,899] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 09:48:57 kafka | [2025-06-19 09:45:23,908] INFO [Controller id=1 epoch=1] Changed partition policy-notification-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:48:57 kafka | [2025-06-19 09:45:23,908] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-notification', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], 
addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-notification-0 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:45:23,908] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 1 become-leader and 0 become-follower partitions (state.change.logger) 09:48:57 kafka | [2025-06-19 09:45:23,909] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 1 partitions (state.change.logger) 09:48:57 kafka | [2025-06-19 09:45:23,910] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-notification-0 from NewReplica to OnlineReplica (state.change.logger) 09:48:57 kafka | [2025-06-19 09:45:23,910] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 09:48:57 kafka | [2025-06-19 09:45:23,911] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 for 1 partitions (state.change.logger) 09:48:57 kafka | [2025-06-19 09:45:23,911] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-notification', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:45:23,912] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition policy-notification-0 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:45:23,912] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions Set(policy-notification-0) (kafka.server.ReplicaFetcherManager) 09:48:57 kafka | [2025-06-19 09:45:23,912] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 3 from controller 1 epoch 1 as part of the become-leader transition for 1 partitions (state.change.logger) 09:48:57 kafka | [2025-06-19 09:45:23,916] INFO [LogLoader partition=policy-notification-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:48:57 kafka | [2025-06-19 09:45:23,918] INFO Created log for partition policy-notification-0 in /var/lib/kafka/data/policy-notification-0 with properties {} (kafka.log.LogManager) 09:48:57 kafka | [2025-06-19 09:45:23,919] INFO [Partition policy-notification-0 broker=1] No checkpointed highwatermark is found for partition policy-notification-0 (kafka.cluster.Partition) 09:48:57 kafka | [2025-06-19 09:45:23,920] INFO [Partition policy-notification-0 broker=1] Log loaded for partition policy-notification-0 with initial high watermark 0 (kafka.cluster.Partition) 09:48:57 kafka | [2025-06-19 09:45:23,920] INFO [Broker id=1] Leader policy-notification-0 with topic id Some(hivvtuTxTFiN3icIS17U8Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 09:48:57 kafka | [2025-06-19 09:45:23,925] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition policy-notification-0 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:45:23,927] INFO [Broker id=1] Finished LeaderAndIsr request in 16ms correlationId 3 from controller 1 for 1 partitions (state.change.logger) 09:48:57 kafka | [2025-06-19 09:45:23,928] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=hivvtuTxTFiN3icIS17U8Q, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 3 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 09:48:57 kafka | [2025-06-19 09:45:23,929] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-notification', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-notification-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:45:23,929] INFO [Broker id=1] Add 1 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 09:48:57 kafka | [2025-06-19 09:45:23,931] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 4 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 09:48:57 kafka | [2025-06-19 09:46:58,992] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group testgrp in Empty state. Created a new member id rdkafka-672841ff-914f-40a7-9ce5-16ac6a911832 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) 09:48:57 kafka | [2025-06-19 09:46:58,993] INFO [GroupCoordinator 1]: Preparing to rebalance group testgrp in state PreparingRebalance with old generation 0 (__consumer_offsets-3) (reason: Adding new member rdkafka-672841ff-914f-40a7-9ce5-16ac6a911832 with group instance id None; client reason: not provided) (kafka.coordinator.group.GroupCoordinator) 09:48:57 kafka | [2025-06-19 09:47:01,995] INFO [GroupCoordinator 1]: Stabilized group testgrp generation 1 (__consumer_offsets-3) with 1 members (kafka.coordinator.group.GroupCoordinator) 09:48:57 kafka | [2025-06-19 09:47:01,999] INFO [GroupCoordinator 1]: Assignment received from leader rdkafka-672841ff-914f-40a7-9ce5-16ac6a911832 for group testgrp for generation 1. The group has 1 members, 0 of which are static. 
(kafka.coordinator.group.GroupCoordinator) 09:48:57 kafka | [2025-06-19 09:47:02,121] INFO [GroupCoordinator 1]: Preparing to rebalance group testgrp in state PreparingRebalance with old generation 1 (__consumer_offsets-3) (reason: Removing member rdkafka-672841ff-914f-40a7-9ce5-16ac6a911832 on LeaveGroup; client reason: not provided) (kafka.coordinator.group.GroupCoordinator) 09:48:57 kafka | [2025-06-19 09:47:02,123] INFO [GroupCoordinator 1]: Group testgrp with generation 2 is now empty (__consumer_offsets-3) (kafka.coordinator.group.GroupCoordinator) 09:48:57 kafka | [2025-06-19 09:47:02,126] INFO [GroupCoordinator 1]: Member MemberMetadata(memberId=rdkafka-672841ff-914f-40a7-9ce5-16ac6a911832, groupInstanceId=None, clientId=rdkafka, clientHost=/172.17.0.5, sessionTimeoutMs=45000, rebalanceTimeoutMs=300000, supportedProtocols=List(range, roundrobin)) has left group testgrp through explicit `LeaveGroup`; client reason: not provided (kafka.coordinator.group.GroupCoordinator) 09:48:57 kafka | [2025-06-19 09:47:24,920] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group testgrp in Empty state. Created a new member id rdkafka-b7cceaff-e218-47ea-a198-ddcab9444682 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) 09:48:57 kafka | [2025-06-19 09:47:24,921] INFO [GroupCoordinator 1]: Preparing to rebalance group testgrp in state PreparingRebalance with old generation 2 (__consumer_offsets-3) (reason: Adding new member rdkafka-b7cceaff-e218-47ea-a198-ddcab9444682 with group instance id None; client reason: not provided) (kafka.coordinator.group.GroupCoordinator) 09:48:57 kafka | [2025-06-19 09:47:27,922] INFO [GroupCoordinator 1]: Stabilized group testgrp generation 3 (__consumer_offsets-3) with 1 members (kafka.coordinator.group.GroupCoordinator) 09:48:57 kafka | [2025-06-19 09:47:27,926] INFO [GroupCoordinator 1]: Assignment received from leader rdkafka-b7cceaff-e218-47ea-a198-ddcab9444682 for group testgrp for generation 3. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) 09:48:57 kafka | [2025-06-19 09:47:27,932] INFO [GroupCoordinator 1]: Preparing to rebalance group testgrp in state PreparingRebalance with old generation 3 (__consumer_offsets-3) (reason: Removing member rdkafka-b7cceaff-e218-47ea-a198-ddcab9444682 on LeaveGroup; client reason: not provided) (kafka.coordinator.group.GroupCoordinator) 09:48:57 kafka | [2025-06-19 09:47:27,932] INFO [GroupCoordinator 1]: Group testgrp with generation 4 is now empty (__consumer_offsets-3) (kafka.coordinator.group.GroupCoordinator) 09:48:57 kafka | [2025-06-19 09:47:27,933] INFO [GroupCoordinator 1]: Member MemberMetadata(memberId=rdkafka-b7cceaff-e218-47ea-a198-ddcab9444682, groupInstanceId=None, clientId=rdkafka, clientHost=/172.17.0.5, sessionTimeoutMs=45000, rebalanceTimeoutMs=300000, supportedProtocols=List(range, roundrobin)) has left group testgrp through explicit `LeaveGroup`; client reason: not provided (kafka.coordinator.group.GroupCoordinator) 09:48:57 kafka | [2025-06-19 09:47:50,541] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group testgrp in Empty state. Created a new member id rdkafka-114e7665-5ee8-4f4c-867b-5c3a8bbade8a and request the member to rejoin with this id. 
(kafka.coordinator.group.GroupCoordinator) 09:48:57 kafka | [2025-06-19 09:47:50,543] INFO [GroupCoordinator 1]: Preparing to rebalance group testgrp in state PreparingRebalance with old generation 4 (__consumer_offsets-3) (reason: Adding new member rdkafka-114e7665-5ee8-4f4c-867b-5c3a8bbade8a with group instance id None; client reason: not provided) (kafka.coordinator.group.GroupCoordinator) 09:48:57 kafka | [2025-06-19 09:47:53,543] INFO [GroupCoordinator 1]: Stabilized group testgrp generation 5 (__consumer_offsets-3) with 1 members (kafka.coordinator.group.GroupCoordinator) 09:48:57 kafka | [2025-06-19 09:47:53,547] INFO [GroupCoordinator 1]: Assignment received from leader rdkafka-114e7665-5ee8-4f4c-867b-5c3a8bbade8a for group testgrp for generation 5. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) 09:48:57 kafka | [2025-06-19 09:47:53,555] INFO [GroupCoordinator 1]: Preparing to rebalance group testgrp in state PreparingRebalance with old generation 5 (__consumer_offsets-3) (reason: Removing member rdkafka-114e7665-5ee8-4f4c-867b-5c3a8bbade8a on LeaveGroup; client reason: not provided) (kafka.coordinator.group.GroupCoordinator) 09:48:57 kafka | [2025-06-19 09:47:53,556] INFO [GroupCoordinator 1]: Group testgrp with generation 6 is now empty (__consumer_offsets-3) (kafka.coordinator.group.GroupCoordinator) 09:48:57 kafka | [2025-06-19 09:47:53,556] INFO [GroupCoordinator 1]: Member MemberMetadata(memberId=rdkafka-114e7665-5ee8-4f4c-867b-5c3a8bbade8a, groupInstanceId=None, clientId=rdkafka, clientHost=/172.17.0.5, sessionTimeoutMs=45000, rebalanceTimeoutMs=300000, supportedProtocols=List(range, roundrobin)) has left group testgrp through explicit `LeaveGroup`; client reason: not provided (kafka.coordinator.group.GroupCoordinator) 09:48:57 kafka | [2025-06-19 09:48:00,815] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController) 09:48:57 kafka | [2025-06-19 09:48:00,816] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController) 09:48:57 kafka | [2025-06-19 09:48:00,821] DEBUG [Controller id=1] Topics not in preferred replica for broker 1 HashMap() (kafka.controller.KafkaController) 09:48:57 kafka | [2025-06-19 09:48:00,822] TRACE [Controller id=1] Leader imbalance ratio for broker 1 is 0.0 (kafka.controller.KafkaController) 09:48:57 policy-api | Waiting for policy-db-migrator port 6824... 09:48:57 policy-api | policy-db-migrator (172.17.0.5:6824) open 09:48:57 policy-api | Policy api config file: /opt/app/policy/api/etc/apiParameters.yaml 09:48:57 policy-api | 09:48:57 policy-api | . 
[Spring Boot ASCII-art banner elided] 09:48:57 policy-api | 09:48:57 policy-api | :: Spring Boot :: (v3.4.6) 09:48:57 policy-api | 09:48:57 policy-api | [2025-06-19T09:43:03.537+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.2.Final 09:48:57 policy-api | [2025-06-19T09:43:03.656+00:00|INFO|PolicyApiApplication|main] Starting PolicyApiApplication using Java 17.0.15 with PID 45 (/app/api.jar started by policy in /opt/app/policy/api/bin) 09:48:57 policy-api | [2025-06-19T09:43:03.657+00:00|INFO|PolicyApiApplication|main] The following 1 profile is active: "default" 09:48:57 policy-api | [2025-06-19T09:43:05.336+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. 09:48:57 policy-api | [2025-06-19T09:43:05.549+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 197 ms. Found 6 JPA repository interfaces. 09:48:57 policy-api | [2025-06-19T09:43:06.287+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port 6969 (http) 09:48:57 policy-api | [2025-06-19T09:43:06.302+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] 09:48:57 policy-api | [2025-06-19T09:43:06.304+00:00|INFO|StandardService|main] Starting service [Tomcat] 09:48:57 policy-api | [2025-06-19T09:43:06.305+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.41] 09:48:57 policy-api | [2025-06-19T09:43:06.345+00:00|INFO|[/policy/api/v1]|main] Initializing Spring embedded WebApplicationContext 09:48:57 policy-api | [2025-06-19T09:43:06.345+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 2611 ms 09:48:57 policy-api | [2025-06-19T09:43:06.683+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] 09:48:57 policy-api | [2025-06-19T09:43:06.769+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.6.16.Final 09:48:57 policy-api | [2025-06-19T09:43:06.818+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled 09:48:57 policy-api | [2025-06-19T09:43:07.264+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer 09:48:57 policy-api | [2025-06-19T09:43:07.310+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... 09:48:57 policy-api | [2025-06-19T09:43:07.529+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.postgresql.jdbc.PgConnection@5d75f90e 09:48:57 policy-api | [2025-06-19T09:43:07.531+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed.
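[Editor's note] The GroupCoordinator entries in the kafka output above trace a complete consumer-group lifecycle: a dynamic member joins with an unknown member id, the group moves through PreparingRebalance, stabilizes with one member, and the leader's assignment is accepted; the rdkafka-* member ids indicate librdkafka-based clients. As a rough illustration only (the topic name and the settings not shown in the log are hypothetical, not taken from this job), a minimal confluent-kafka consumer that would produce the same coordinator sequence:

```python
# Minimal sketch, assuming a reachable broker at kafka:9092 and an existing
# topic; "demo-topic" is hypothetical. subscribe() plus the first poll()
# triggers the sequence logged above: JoinGroup with an unknown member id ->
# "Preparing to rebalance" -> "Stabilized group ... with 1 members" ->
# "Assignment received from leader".
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "kafka:9092",  # broker address seen in the log
    "group.id": "testgrp",              # group name seen in the coordinator log
    "auto.offset.reset": "earliest",
    "session.timeout.ms": 45000,        # matches sessionTimeoutMs=45000 above
})
consumer.subscribe(["demo-topic"])      # hypothetical topic
try:
    msg = consumer.poll(10.0)           # first poll completes the group join
    if msg is not None and msg.error() is None:
        print(msg.value())
finally:
    consumer.close()                    # sends the explicit LeaveGroup
```

Closing the consumer sends the explicit `LeaveGroup` recorded for the testgrp members, after which the coordinator bumps the generation and reports the group empty.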
09:48:57 policy-api | [2025-06-19T09:43:07.622+00:00|INFO|pooling|main] HHH10001005: Database info: 09:48:57 policy-api | Database JDBC URL [Connecting through datasource 'HikariDataSource (HikariPool-1)'] 09:48:57 policy-api | Database driver: undefined/unknown 09:48:57 policy-api | Database version: 16.4 09:48:57 policy-api | Autocommit mode: undefined/unknown 09:48:57 policy-api | Isolation level: undefined/unknown 09:48:57 policy-api | Minimum pool size: undefined/unknown 09:48:57 policy-api | Maximum pool size: undefined/unknown 09:48:57 policy-api | [2025-06-19T09:43:09.687+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) 09:48:57 policy-api | [2025-06-19T09:43:09.691+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' 09:48:57 policy-api | [2025-06-19T09:43:10.424+00:00|WARN|ApiDatabaseInitializer|main] Detected multi-versioned type: policytypes/onap.policies.monitoring.tcagen2.v2.yaml 09:48:57 policy-api | [2025-06-19T09:43:11.408+00:00|INFO|ApiDatabaseInitializer|main] Multi-versioned Service Template [onap.policies.Monitoring, onap.policies.monitoring.tcagen2] 09:48:57 policy-api | [2025-06-19T09:43:12.707+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning 09:48:57 policy-api | [2025-06-19T09:43:12.753+00:00|INFO|InitializeUserDetailsBeanManagerConfigurer$InitializeUserDetailsManagerConfigurer|main] Global AuthenticationManager configured with UserDetailsService bean with name inMemoryUserDetailsManager 09:48:57 policy-api | [2025-06-19T09:43:13.566+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoints beneath base path '' 09:48:57 policy-api | [2025-06-19T09:43:13.782+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] 09:48:57 policy-api | [2025-06-19T09:43:13.809+00:00|INFO|TomcatWebServer|main] Tomcat started on port 6969 (http) with context path '/policy/api/v1' 09:48:57 policy-api | [2025-06-19T09:43:13.835+00:00|INFO|PolicyApiApplication|main] Started PolicyApiApplication in 11.016 seconds (process running for 11.67) 09:48:57 policy-api | [2025-06-19T09:43:39.920+00:00|INFO|[/policy/api/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet' 09:48:57 policy-api | [2025-06-19T09:43:39.920+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet' 09:48:57 policy-api | [2025-06-19T09:43:39.922+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 2 ms 09:48:57 policy-api | [2025-06-19T09:46:36.748+00:00|INFO|OrderedServiceImpl|http-nio-6969-exec-6] ***** OrderedServiceImpl implementers: 09:48:57 policy-api | [] 09:48:57 policy-api | [2025-06-19T09:47:53.931+00:00|WARN|CommonRestController|http-nio-6969-exec-1] "incoming fragment" INVALID, item has status INVALID 09:48:57 policy-api | item "entity" value "abac:1.0.7" INVALID, does not equal existing entity 09:48:57 policy-api | 09:48:57 policy-csit | Invoking the robot tests from: opa-pdp-test.robot opa-pdp-slas.robot 09:48:57 policy-csit | Run Robot test 09:48:57 policy-csit | ROBOT_VARIABLES=-v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies 09:48:57 policy-csit | -v 
NODETEMPLATES:/opt/robotworkspace/models/models-examples/src/main/resources/nodetemplates 09:48:57 policy-csit | -v POLICY_API_IP:policy-api:6969 09:48:57 policy-csit | -v POLICY_RUNTIME_ACM_IP:policy-clamp-runtime-acm:6969 09:48:57 policy-csit | -v POLICY_PARTICIPANT_SIM_IP:policy-clamp-ac-sim-ppnt:6969 09:48:57 policy-csit | -v POLICY_PAP_IP:policy-pap:6969 09:48:57 policy-csit | -v APEX_IP:policy-apex-pdp:6969 09:48:57 policy-csit | -v APEX_EVENTS_IP:policy-apex-pdp:23324 09:48:57 policy-csit | -v KAFKA_IP:kafka:9092 09:48:57 policy-csit | -v PROMETHEUS_IP:prometheus:9090 09:48:57 policy-csit | -v POLICY_PDPX_IP:policy-xacml-pdp:6969 09:48:57 policy-csit | -v POLICY_OPA_IP:policy-opa-pdp:8282 09:48:57 policy-csit | -v POLICY_DROOLS_IP:policy-drools-pdp:9696 09:48:57 policy-csit | -v DROOLS_IP:policy-drools-apps:6969 09:48:57 policy-csit | -v DROOLS_IP_2:policy-drools-apps:9696 09:48:57 policy-csit | -v TEMP_FOLDER:/tmp/distribution 09:48:57 policy-csit | -v DISTRIBUTION_IP:policy-distribution:6969 09:48:57 policy-csit | -v TEST_ENV:docker 09:48:57 policy-csit | -v JAEGER_IP:jaeger:16686 09:48:57 policy-csit | Starting Robot test suites ... 09:48:57 policy-csit | ============================================================================== 09:48:57 policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas 09:48:57 policy-csit | ============================================================================== 09:48:57 policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas.Opa-Pdp-Test 09:48:57 policy-csit | ============================================================================== 09:48:57 policy-csit | Healthcheck :: Verify OPA PDP health check | PASS | 09:48:57 policy-csit | ------------------------------------------------------------------------------ 09:48:57 policy-csit | ValidateDataBeforePolicyDeployment | PASS | 09:48:57 policy-csit | ------------------------------------------------------------------------------ 09:48:57 policy-csit | ValidatesZonePolicy | PASS | 09:48:57 policy-csit | ------------------------------------------------------------------------------ 09:48:57 policy-csit | ValidatesVehiclePolicy | PASS | 09:48:57 policy-csit | ------------------------------------------------------------------------------ 09:48:57 policy-csit | ValidatesAbacPolicy | PASS | 09:48:57 policy-csit | ------------------------------------------------------------------------------ 09:48:57 policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas.Opa-Pdp-Test | PASS | 09:48:57 policy-csit | 5 tests, 5 passed, 0 failed 09:48:57 policy-csit | ============================================================================== 09:48:57 policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas.Opa-Pdp-Slas 09:48:57 policy-csit | ============================================================================== 09:48:57 policy-csit | WaitForPrometheusServer :: Sleep time to wait for Prometheus serve... | PASS | 09:48:57 policy-csit | ------------------------------------------------------------------------------ 09:48:57 policy-csit | ValidateOPAPolicyDecisionsTotalCounter :: Validate opa policy deci... | PASS | 09:48:57 policy-csit | ------------------------------------------------------------------------------ 09:48:57 policy-csit | ValidateOPAPolicyDataTotalCounter :: Validate opa policy data coun... | PASS | 09:48:57 policy-csit | ------------------------------------------------------------------------------ 09:48:57 policy-csit | ValidateOPADecisionAverageResponseTime :: Ensure average response ... 
| PASS | 09:48:57 policy-csit | ------------------------------------------------------------------------------ 09:48:57 policy-csit | ValidateOPADataAverageResponseTime :: Ensure average response time... | PASS | 09:48:57 policy-csit | ------------------------------------------------------------------------------ 09:48:57 policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas.Opa-Pdp-Slas | PASS | 09:48:57 policy-csit | 5 tests, 5 passed, 0 failed 09:48:57 policy-csit | ============================================================================== 09:48:57 policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas | PASS | 09:48:57 policy-csit | 10 tests, 10 passed, 0 failed 09:48:57 policy-csit | ============================================================================== 09:48:57 policy-csit | Output: /tmp/results/output.xml 09:48:57 policy-csit | Log: /tmp/results/log.html 09:48:57 policy-csit | Report: /tmp/results/report.html 09:48:57 policy-csit | RESULT: 0 09:48:58 policy-db-migrator | Waiting for postgres port 5432... 09:48:58 policy-db-migrator | nc: connect to postgres (172.17.0.3) port 5432 (tcp) failed: Connection refused 09:48:58 policy-db-migrator | nc: connect to postgres (172.17.0.3) port 5432 (tcp) failed: Connection refused 09:48:58 policy-db-migrator | nc: connect to postgres (172.17.0.3) port 5432 (tcp) failed: Connection refused 09:48:58 policy-db-migrator | Connection to postgres (172.17.0.3) 5432 port [tcp/postgresql] succeeded! 09:48:58 policy-db-migrator | Initializing policyadmin... 09:48:58 policy-db-migrator | 321 blocks 09:48:58 policy-db-migrator | Preparing upgrade release version: 0800 09:48:58 policy-db-migrator | Preparing upgrade release version: 0900 09:48:58 policy-db-migrator | Preparing upgrade release version: 1000 09:48:58 policy-db-migrator | Preparing upgrade release version: 1100 09:48:58 policy-db-migrator | Preparing upgrade release version: 1200 09:48:58 policy-db-migrator | Preparing upgrade release version: 1300 09:48:58 policy-db-migrator | Done 09:48:58 policy-db-migrator | List of databases 09:48:58 policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges 09:48:58 policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- 09:48:58 policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 09:48:58 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 09:48:58 policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 09:48:58 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 09:48:58 policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 09:48:58 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 09:48:58 policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 09:48:58 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 09:48:58 policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 09:48:58 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 09:48:58 policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 09:48:58 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 09:48:58 policy-db-migrator | 
postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | 09:48:58 policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 09:48:58 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 09:48:58 policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 09:48:58 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 09:48:58 policy-db-migrator | (9 rows) 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | CREATE TABLE 09:48:58 policy-db-migrator | CREATE TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | name | version 09:48:58 policy-db-migrator | -------------+--------- 09:48:58 policy-db-migrator | policyadmin | 0 09:48:58 policy-db-migrator | (1 row) 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime 09:48:58 policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+-------- 09:48:58 policy-db-migrator | (0 rows) 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | policyadmin: upgrade available: 0 -> 1300 09:48:58 policy-db-migrator | List of databases 09:48:58 policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges 09:48:58 policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- 09:48:58 policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 09:48:58 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 09:48:58 policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 09:48:58 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 09:48:58 policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 09:48:58 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 09:48:58 policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 09:48:58 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 09:48:58 policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 09:48:58 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 09:48:58 policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 09:48:58 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 09:48:58 policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | 09:48:58 policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 09:48:58 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 09:48:58 policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 09:48:58 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 09:48:58 policy-db-migrator | (9 rows) 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping 09:48:58 policy-db-migrator | CREATE TABLE 09:48:58 policy-db-migrator | NOTICE: relation "policyadmin_schema_changelog" already exists, skipping 09:48:58 policy-db-migrator | CREATE TABLE 09:48:58 policy-db-migrator | upgrade: 0 -> 
1300 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0100-jpapdpgroup_properties.sql 09:48:58 policy-db-migrator | CREATE TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql 09:48:58 policy-db-migrator | CREATE TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0120-jpapdpsubgroup_policies.sql 09:48:58 policy-db-migrator | CREATE TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0130-jpapdpsubgroup_properties.sql 09:48:58 policy-db-migrator | CREATE TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0140-jpapdpsubgroup_supportedpolicytypes.sql 09:48:58 policy-db-migrator | CREATE TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0150-jpatoscacapabilityassignment_attributes.sql 09:48:58 policy-db-migrator | CREATE TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0160-jpatoscacapabilityassignment_metadata.sql 09:48:58 policy-db-migrator | CREATE TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0170-jpatoscacapabilityassignment_occurrences.sql 09:48:58 policy-db-migrator | CREATE TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0180-jpatoscacapabilityassignment_properties.sql 09:48:58 policy-db-migrator | CREATE TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0190-jpatoscacapabilitytype_metadata.sql 09:48:58 policy-db-migrator | CREATE TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0200-jpatoscacapabilitytype_properties.sql 09:48:58 policy-db-migrator | CREATE TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0210-jpatoscadatatype_constraints.sql 09:48:58 policy-db-migrator | CREATE TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0220-jpatoscadatatype_metadata.sql 09:48:58 policy-db-migrator | CREATE TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0230-jpatoscadatatype_properties.sql 09:48:58 policy-db-migrator | CREATE TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0240-jpatoscanodetemplate_metadata.sql 09:48:58 policy-db-migrator | CREATE TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > 
upgrade 0250-jpatoscanodetemplate_properties.sql 09:48:58 policy-db-migrator | CREATE TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0260-jpatoscanodetype_metadata.sql 09:48:58 policy-db-migrator | CREATE TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0270-jpatoscanodetype_properties.sql 09:48:58 policy-db-migrator | CREATE TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0280-jpatoscapolicy_metadata.sql 09:48:58 policy-db-migrator | CREATE TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0290-jpatoscapolicy_properties.sql 09:48:58 policy-db-migrator | CREATE TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0300-jpatoscapolicy_targets.sql 09:48:58 policy-db-migrator | CREATE TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0310-jpatoscapolicytype_metadata.sql 09:48:58 policy-db-migrator | CREATE TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0320-jpatoscapolicytype_properties.sql 09:48:58 policy-db-migrator | CREATE TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0330-jpatoscapolicytype_targets.sql 09:48:58 policy-db-migrator | CREATE TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0340-jpatoscapolicytype_triggers.sql 09:48:58 policy-db-migrator | CREATE TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0350-jpatoscaproperty_constraints.sql 09:48:58 policy-db-migrator | CREATE TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0360-jpatoscaproperty_metadata.sql 09:48:58 policy-db-migrator | CREATE TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0370-jpatoscarelationshiptype_metadata.sql 09:48:58 policy-db-migrator | CREATE TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0380-jpatoscarelationshiptype_properties.sql 09:48:58 policy-db-migrator | CREATE TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0390-jpatoscarequirement_metadata.sql 09:48:58 policy-db-migrator | CREATE TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0400-jpatoscarequirement_occurrences.sql 09:48:58 policy-db-migrator | CREATE TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 
policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0410-jpatoscarequirement_properties.sql 09:48:58 policy-db-migrator | CREATE TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0420-jpatoscaservicetemplate_metadata.sql 09:48:58 policy-db-migrator | CREATE TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0430-jpatoscatopologytemplate_inputs.sql 09:48:58 policy-db-migrator | CREATE TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0440-pdpgroup_pdpsubgroup.sql 09:48:58 policy-db-migrator | CREATE TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0450-pdpgroup.sql 09:48:58 policy-db-migrator | CREATE TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0460-pdppolicystatus.sql 09:48:58 policy-db-migrator | CREATE TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0470-pdp.sql 09:48:58 policy-db-migrator | CREATE TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0480-pdpstatistics.sql 09:48:58 policy-db-migrator | CREATE TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0490-pdpsubgroup_pdp.sql 09:48:58 policy-db-migrator | CREATE TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0500-pdpsubgroup.sql 09:48:58 policy-db-migrator | CREATE TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0510-toscacapabilityassignment.sql 09:48:58 policy-db-migrator | CREATE TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0520-toscacapabilityassignments.sql 09:48:58 policy-db-migrator | CREATE TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0530-toscacapabilityassignments_toscacapabilityassignment.sql 09:48:58 policy-db-migrator | CREATE TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0540-toscacapabilitytype.sql 09:48:58 policy-db-migrator | CREATE TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0550-toscacapabilitytypes.sql 09:48:58 policy-db-migrator | CREATE TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0560-toscacapabilitytypes_toscacapabilitytype.sql 09:48:58 policy-db-migrator | CREATE TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 
policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0570-toscadatatype.sql 09:48:58 policy-db-migrator | CREATE TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0580-toscadatatypes.sql 09:48:58 policy-db-migrator | CREATE TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0590-toscadatatypes_toscadatatype.sql 09:48:58 policy-db-migrator | CREATE TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0600-toscanodetemplate.sql 09:48:58 policy-db-migrator | CREATE TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0610-toscanodetemplates.sql 09:48:58 policy-db-migrator | CREATE TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0620-toscanodetemplates_toscanodetemplate.sql 09:48:58 policy-db-migrator | CREATE TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0630-toscanodetype.sql 09:48:58 policy-db-migrator | CREATE TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0640-toscanodetypes.sql 09:48:58 policy-db-migrator | CREATE TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0650-toscanodetypes_toscanodetype.sql 09:48:58 policy-db-migrator | CREATE TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0660-toscaparameter.sql 09:48:58 policy-db-migrator | CREATE TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0670-toscapolicies.sql 09:48:58 policy-db-migrator | CREATE TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0680-toscapolicies_toscapolicy.sql 09:48:58 policy-db-migrator | CREATE TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0690-toscapolicy.sql 09:48:58 policy-db-migrator | CREATE TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0700-toscapolicytype.sql 09:48:58 policy-db-migrator | CREATE TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0710-toscapolicytypes.sql 09:48:58 policy-db-migrator | CREATE TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0720-toscapolicytypes_toscapolicytype.sql 09:48:58 policy-db-migrator | CREATE TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 
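[Editor's note] Stepping back to the CSIT invocation earlier in this output: the harness starts the robot CLI with every endpoint passed as a `-v` override (the ROBOT_VARIABLES block). The same run can be expressed through Robot Framework's Python API; this is a sketch, not the harness's actual code, and only a subset of the variables from the log is repeated here.

```python
# Sketch of the CSIT invocation via Robot Framework's Python API. Suite file
# names, variable values and the output directory are taken verbatim from the
# log; the full job passes many more -v overrides than shown here.
from robot import run

rc = run(
    "opa-pdp-test.robot",
    "opa-pdp-slas.robot",
    variable=[
        "POLICY_OPA_IP:policy-opa-pdp:8282",
        "POLICY_API_IP:policy-api:6969",
        "KAFKA_IP:kafka:9092",
        "PROMETHEUS_IP:prometheus:9090",
        "TEST_ENV:docker",
    ],
    outputdir="/tmp/results",  # matches the Output/Log/Report paths in the log
)
print(f"RESULT: {rc}")         # 0 when all tests pass, as reported above
```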
09:48:58 policy-db-migrator | > upgrade 0730-toscaproperty.sql 09:48:58 policy-db-migrator | CREATE TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0740-toscarelationshiptype.sql 09:48:58 policy-db-migrator | CREATE TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0750-toscarelationshiptypes.sql 09:48:58 policy-db-migrator | CREATE TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0760-toscarelationshiptypes_toscarelationshiptype.sql 09:48:58 policy-db-migrator | CREATE TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0770-toscarequirement.sql 09:48:58 policy-db-migrator | CREATE TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0780-toscarequirements.sql 09:48:58 policy-db-migrator | CREATE TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0790-toscarequirements_toscarequirement.sql 09:48:58 policy-db-migrator | CREATE TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0800-toscaservicetemplate.sql 09:48:58 policy-db-migrator | CREATE TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0810-toscatopologytemplate.sql 09:48:58 policy-db-migrator | CREATE TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0820-toscatrigger.sql 09:48:58 policy-db-migrator | CREATE TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0830-FK_ToscaNodeTemplate_capabilitiesName.sql 09:48:58 policy-db-migrator | CREATE INDEX 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0840-FK_ToscaNodeTemplate_requirementsName.sql 09:48:58 policy-db-migrator | CREATE INDEX 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0850-FK_ToscaNodeType_requirementsName.sql 09:48:58 policy-db-migrator | CREATE INDEX 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql 09:48:58 policy-db-migrator | CREATE INDEX 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0870-FK_ToscaServiceTemplate_dataTypesName.sql 09:48:58 policy-db-migrator | CREATE INDEX 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0880-FK_ToscaServiceTemplate_nodeTypesName.sql 09:48:58 policy-db-migrator | CREATE INDEX 09:48:58 policy-db-migrator | 
INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0890-FK_ToscaServiceTemplate_policyTypesName.sql 09:48:58 policy-db-migrator | CREATE INDEX 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql 09:48:58 policy-db-migrator | CREATE INDEX 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql 09:48:58 policy-db-migrator | CREATE INDEX 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql 09:48:58 policy-db-migrator | CREATE INDEX 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0940-PdpPolicyStatus_PdpGroup.sql 09:48:58 policy-db-migrator | CREATE INDEX 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql 09:48:58 policy-db-migrator | CREATE INDEX 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0960-FK_ToscaNodeTemplate_capabilitiesName.sql 09:48:58 policy-db-migrator | ALTER TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0970-FK_ToscaNodeTemplate_requirementsName.sql 09:48:58 policy-db-migrator | ALTER TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0980-FK_ToscaNodeType_requirementsName.sql 09:48:58 policy-db-migrator | ALTER TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql 09:48:58 policy-db-migrator | ALTER TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 1000-FK_ToscaServiceTemplate_dataTypesName.sql 09:48:58 policy-db-migrator | ALTER TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 1010-FK_ToscaServiceTemplate_nodeTypesName.sql 09:48:58 policy-db-migrator | ALTER TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 1020-FK_ToscaServiceTemplate_policyTypesName.sql 09:48:58 policy-db-migrator | ALTER TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql 09:48:58 policy-db-migrator | ALTER TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql 09:48:58 policy-db-migrator | ALTER TABLE 09:48:58 
policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 1050-FK_ToscaTopologyTemplate_policyName.sql 09:48:58 policy-db-migrator | ALTER TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql 09:48:58 policy-db-migrator | ALTER TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0100-pdp.sql 09:48:58 policy-db-migrator | ALTER TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql 09:48:58 policy-db-migrator | CREATE INDEX 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0120-pk_pdpstatistics.sql 09:48:58 policy-db-migrator | ALTER TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0130-pdpstatistics.sql 09:48:58 policy-db-migrator | ALTER TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0140-pk_pdpstatistics.sql 09:48:58 policy-db-migrator | UPDATE 0 09:48:58 policy-db-migrator | ALTER TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0150-pdpstatistics.sql 09:48:58 policy-db-migrator | ALTER TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0160-jpapdpstatistics_enginestats.sql 09:48:58 policy-db-migrator | ALTER TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0170-jpapdpstatistics_enginestats.sql 09:48:58 policy-db-migrator | UPDATE 0 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0180-jpapdpstatistics_enginestats.sql 09:48:58 policy-db-migrator | ALTER TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0190-jpapolicyaudit.sql 09:48:58 policy-db-migrator | CREATE TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0200-JpaPolicyAuditIndex_timestamp.sql 09:48:58 policy-db-migrator | CREATE INDEX 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0210-sequence.sql 09:48:58 policy-db-migrator | CREATE TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0220-sequence.sql 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0100-jpatoscapolicy_targets.sql 09:48:58 policy-db-migrator | ALTER TABLE 09:48:58 policy-db-migrator | 
INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0110-jpatoscapolicytype_targets.sql 09:48:58 policy-db-migrator | ALTER TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0120-toscatrigger.sql 09:48:58 policy-db-migrator | DROP TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0130-jpatoscapolicytype_triggers.sql 09:48:58 policy-db-migrator | ALTER TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0140-toscaparameter.sql 09:48:58 policy-db-migrator | DROP TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0150-toscaproperty.sql 09:48:58 policy-db-migrator | DROP TABLE 09:48:58 policy-db-migrator | DROP TABLE 09:48:58 policy-db-migrator | DROP TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0160-jpapolicyaudit_pk.sql 09:48:58 policy-db-migrator | ALTER TABLE 09:48:58 policy-db-migrator | ALTER TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0170-pdpstatistics_pk.sql 09:48:58 policy-db-migrator | ALTER TABLE 09:48:58 policy-db-migrator | ALTER TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0180-jpatoscanodetemplate_metadata.sql 09:48:58 policy-db-migrator | ALTER TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0100-upgrade.sql 09:48:58 policy-db-migrator | msg 09:48:58 policy-db-migrator | --------------------------- 09:48:58 policy-db-migrator | upgrade to 1100 completed 09:48:58 policy-db-migrator | (1 row) 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0100-jpapolicyaudit_renameuser.sql 09:48:58 policy-db-migrator | ALTER TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql 09:48:58 policy-db-migrator | DROP INDEX 09:48:58 policy-db-migrator | CREATE INDEX 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0120-audit_sequence.sql 09:48:58 policy-db-migrator | CREATE TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0130-statistics_sequence.sql 09:48:58 policy-db-migrator | CREATE TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | > upgrade 0100-pdpstatistics.sql 09:48:58 policy-db-migrator | DROP TABLE 09:48:58 policy-db-migrator | INSERT 0 1 09:48:58 policy-db-migrator | rc=0 09:48:58 policy-db-migrator | 
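[Editor's note] The db-migrator output follows one fixed pattern per script: announce `> upgrade NNNN-name.sql`, apply it, and report `rc=0`. Below is a condensed sketch of that loop, including the initial wait for the postgres port (the `nc: connect ... refused` retries near the start of the migrator output). All names here are illustrative; the real migrator is a shell script in the policy/docker repo and also records each script in policyadmin_schema_changelog, as the table at the end of this output shows.

```python
# Condensed sketch of the migrator pattern visible in this log: block until
# postgres accepts TCP connections, then apply numbered *.sql scripts in
# order, echoing an rc for each. Function and path names are illustrative.
import socket
import subprocess
import time
from pathlib import Path

def wait_for_port(host: str, port: int, delay: float = 2.0) -> None:
    # Equivalent of the migrator's nc retry loop against postgres:5432.
    while True:
        try:
            with socket.create_connection((host, port), timeout=2):
                return
        except OSError:
            time.sleep(delay)

def upgrade(sql_dir: str, db: str = "policyadmin") -> None:
    wait_for_port("postgres", 5432)
    for script in sorted(Path(sql_dir).glob("*.sql")):  # 0100-..., 0110-...
        print(f"> upgrade {script.name}")
        rc = subprocess.run(["psql", "-d", db, "-f", str(script)]).returncode
        print(f"rc={rc}")
        if rc != 0:   # the real script records success/failure per script in
            break     # policyadmin_schema_changelog and stops on first error
```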
09:48:58 policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql
09:48:58 policy-db-migrator | DROP TABLE
09:48:58 policy-db-migrator | INSERT 0 1
09:48:58 policy-db-migrator | rc=0
09:48:58 policy-db-migrator |
09:48:58 policy-db-migrator | > upgrade 0120-statistics_sequence.sql
09:48:58 policy-db-migrator | DROP TABLE
09:48:58 policy-db-migrator | INSERT 0 1
09:48:58 policy-db-migrator | INSERT 0 1
09:48:58 policy-db-migrator | policyadmin: OK: upgrade (1300)
09:48:58 policy-db-migrator | List of databases
09:48:58 policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges
09:48:58 policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+-----------------------------
09:48:58 policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
09:48:58 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
09:48:58 policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
09:48:58 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
09:48:58 policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
09:48:58 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
09:48:58 policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
09:48:58 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
09:48:58 policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
09:48:58 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
09:48:58 policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
09:48:58 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
09:48:58 policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | |
09:48:58 policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
09:48:58 policy-db-migrator | | | | | | | | | postgres=CTc/postgres
09:48:58 policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
09:48:58 policy-db-migrator | | | | | | | | | postgres=CTc/postgres
09:48:58 policy-db-migrator | (9 rows)
09:48:58 policy-db-migrator |
09:48:58 policy-db-migrator | CREATE TABLE
09:48:58 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping
09:48:58 policy-db-migrator | NOTICE: relation "policyadmin_schema_changelog" already exists, skipping
09:48:58 policy-db-migrator | CREATE TABLE
09:48:58 policy-db-migrator | name | version
09:48:58 policy-db-migrator | -------------+---------
09:48:58 policy-db-migrator | policyadmin | 1300
09:48:58 policy-db-migrator | (1 row)
09:48:58 policy-db-migrator |
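The database listing and the name | version lookup above are plain psql output and can be reproduced against the same Postgres instance. The container name, the user, and the assumption that the bookkeeping tables live in the migration database are inferred from the log, not taken from the migrator's documentation:

    # Reproduce the "List of databases" block (psql's \l meta-command).
    docker exec postgres psql -U policy_user -d postgres -c '\l'

    # Read back the schema version the migrator recorded (1300 above);
    # which database holds schema_versions is an assumption.
    docker exec postgres psql -U policy_user -d migration \
        -c "SELECT name, version FROM schema_versions WHERE name = 'policyadmin';"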
09:48:58 policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime
09:48:58 policy-db-migrator | -----+---------------------------------------------------------------+-----------+--------------+------------+-------------------+---------+----------------------------
09:48:58 policy-db-migrator | 1 | 0100-jpapdpgroup_properties.sql | upgrade | 0 | 0800 | 1906250942460800u | 1 | 2025-06-19 09:42:46.717549
09:48:58 policy-db-migrator | 2 | 0110-jpapdpstatistics_enginestats.sql | upgrade | 0 | 0800 | 1906250942460800u | 1 | 2025-06-19 09:42:46.769765
09:48:58 policy-db-migrator | 3 | 0120-jpapdpsubgroup_policies.sql | upgrade | 0 | 0800 | 1906250942460800u | 1 | 2025-06-19 09:42:46.825354
09:48:58 policy-db-migrator | 4 | 0130-jpapdpsubgroup_properties.sql | upgrade | 0 | 0800 | 1906250942460800u | 1 | 2025-06-19 09:42:46.881014
09:48:58 policy-db-migrator | 5 | 0140-jpapdpsubgroup_supportedpolicytypes.sql | upgrade | 0 | 0800 | 1906250942460800u | 1 | 2025-06-19 09:42:46.943532
09:48:58 policy-db-migrator | 6 | 0150-jpatoscacapabilityassignment_attributes.sql | upgrade | 0 | 0800 | 1906250942460800u | 1 | 2025-06-19 09:42:47.000574
09:48:58 policy-db-migrator | 7 | 0160-jpatoscacapabilityassignment_metadata.sql | upgrade | 0 | 0800 | 1906250942460800u | 1 | 2025-06-19 09:42:47.058507
09:48:58 policy-db-migrator | 8 | 0170-jpatoscacapabilityassignment_occurrences.sql | upgrade | 0 | 0800 | 1906250942460800u | 1 | 2025-06-19 09:42:47.10752
09:48:58 policy-db-migrator | 9 | 0180-jpatoscacapabilityassignment_properties.sql | upgrade | 0 | 0800 | 1906250942460800u | 1 | 2025-06-19 09:42:47.163868
09:48:58 policy-db-migrator | 10 | 0190-jpatoscacapabilitytype_metadata.sql | upgrade | 0 | 0800 | 1906250942460800u | 1 | 2025-06-19 09:42:47.218583
09:48:58 policy-db-migrator | 11 | 0200-jpatoscacapabilitytype_properties.sql | upgrade | 0 | 0800 | 1906250942460800u | 1 | 2025-06-19 09:42:47.268685
09:48:58 policy-db-migrator | 12 | 0210-jpatoscadatatype_constraints.sql | upgrade | 0 | 0800 | 1906250942460800u | 1 | 2025-06-19 09:42:47.317017
09:48:58 policy-db-migrator | 13 | 0220-jpatoscadatatype_metadata.sql | upgrade | 0 | 0800 | 1906250942460800u | 1 | 2025-06-19 09:42:47.371192
09:48:58 policy-db-migrator | 14 | 0230-jpatoscadatatype_properties.sql | upgrade | 0 | 0800 | 1906250942460800u | 1 | 2025-06-19 09:42:47.422763
09:48:58 policy-db-migrator | 15 | 0240-jpatoscanodetemplate_metadata.sql | upgrade | 0 | 0800 | 1906250942460800u | 1 | 2025-06-19 09:42:47.476331
09:48:58 policy-db-migrator | 16 | 0250-jpatoscanodetemplate_properties.sql | upgrade | 0 | 0800 | 1906250942460800u | 1 | 2025-06-19 09:42:47.53494
09:48:58 policy-db-migrator | 17 | 0260-jpatoscanodetype_metadata.sql | upgrade | 0 | 0800 | 1906250942460800u | 1 | 2025-06-19 09:42:47.593862
09:48:58 policy-db-migrator | 18 | 0270-jpatoscanodetype_properties.sql | upgrade | 0 | 0800 | 1906250942460800u | 1 | 2025-06-19 09:42:47.640343
09:48:58 policy-db-migrator | 19 | 0280-jpatoscapolicy_metadata.sql | upgrade | 0 | 0800 | 1906250942460800u | 1 | 2025-06-19 09:42:47.701825
09:48:58 policy-db-migrator | 20 | 0290-jpatoscapolicy_properties.sql | upgrade | 0 | 0800 | 1906250942460800u | 1 | 2025-06-19 09:42:47.758958
09:48:58 policy-db-migrator | 21 | 0300-jpatoscapolicy_targets.sql | upgrade | 0 | 0800 | 1906250942460800u | 1 | 2025-06-19 09:42:47.8083
09:48:58 policy-db-migrator | 22 | 0310-jpatoscapolicytype_metadata.sql | upgrade | 0 | 0800 | 1906250942460800u | 1 | 2025-06-19 09:42:47.863564
09:48:58 policy-db-migrator | 23 | 0320-jpatoscapolicytype_properties.sql | upgrade | 0 | 0800 | 1906250942460800u | 1 | 2025-06-19 09:42:47.916754
09:48:58 policy-db-migrator | 24 | 0330-jpatoscapolicytype_targets.sql | upgrade | 0 | 0800 | 1906250942460800u | 1 | 2025-06-19 09:42:47.971214
09:48:58 policy-db-migrator | 25 | 0340-jpatoscapolicytype_triggers.sql | upgrade | 0 | 0800 | 1906250942460800u | 1 | 2025-06-19 09:42:48.026684
09:48:58 policy-db-migrator | 26 | 0350-jpatoscaproperty_constraints.sql | upgrade | 0 | 0800 | 1906250942460800u | 1 | 2025-06-19 09:42:48.083549
09:48:58 policy-db-migrator | 27 | 0360-jpatoscaproperty_metadata.sql | upgrade | 0 | 0800 | 1906250942460800u | 1 | 2025-06-19 09:42:48.149996
09:48:58 policy-db-migrator | 28 | 0370-jpatoscarelationshiptype_metadata.sql | upgrade | 0 | 0800 | 1906250942460800u | 1 | 2025-06-19 09:42:48.207941
09:48:58 policy-db-migrator | 29 | 0380-jpatoscarelationshiptype_properties.sql | upgrade | 0 | 0800 | 1906250942460800u | 1 | 2025-06-19 09:42:48.259981
09:48:58 policy-db-migrator | 30 | 0390-jpatoscarequirement_metadata.sql | upgrade | 0 | 0800 | 1906250942460800u | 1 | 2025-06-19 09:42:48.316768
09:48:58 policy-db-migrator | 31 | 0400-jpatoscarequirement_occurrences.sql | upgrade | 0 | 0800 | 1906250942460800u | 1 | 2025-06-19 09:42:48.368857
09:48:58 policy-db-migrator | 32 | 0410-jpatoscarequirement_properties.sql | upgrade | 0 | 0800 | 1906250942460800u | 1 | 2025-06-19 09:42:48.440211
09:48:58 policy-db-migrator | 33 | 0420-jpatoscaservicetemplate_metadata.sql | upgrade | 0 | 0800 | 1906250942460800u | 1 | 2025-06-19 09:42:48.501508
09:48:58 policy-db-migrator | 34 | 0430-jpatoscatopologytemplate_inputs.sql | upgrade | 0 | 0800 | 1906250942460800u | 1 | 2025-06-19 09:42:48.55788
09:48:58 policy-db-migrator | 35 | 0440-pdpgroup_pdpsubgroup.sql | upgrade | 0 | 0800 | 1906250942460800u | 1 | 2025-06-19 09:42:48.615166
09:48:58 policy-db-migrator | 36 | 0450-pdpgroup.sql | upgrade | 0 | 0800 | 1906250942460800u | 1 | 2025-06-19 09:42:48.673979
09:48:58 policy-db-migrator | 37 | 0460-pdppolicystatus.sql | upgrade | 0 | 0800 | 1906250942460800u | 1 | 2025-06-19 09:42:48.734201
09:48:58 policy-db-migrator | 38 | 0470-pdp.sql | upgrade | 0 | 0800 | 1906250942460800u | 1 | 2025-06-19 09:42:48.790937
09:48:58 policy-db-migrator | 39 | 0480-pdpstatistics.sql | upgrade | 0 | 0800 | 1906250942460800u | 1 | 2025-06-19 09:42:48.851805
09:48:58 policy-db-migrator | 40 | 0490-pdpsubgroup_pdp.sql | upgrade | 0 | 0800 | 1906250942460800u | 1 | 2025-06-19 09:42:48.911171
09:48:58 policy-db-migrator | 41 | 0500-pdpsubgroup.sql | upgrade | 0 | 0800 | 1906250942460800u | 1 | 2025-06-19 09:42:48.972328
09:48:58 policy-db-migrator | 42 | 0510-toscacapabilityassignment.sql | upgrade | 0 | 0800 | 1906250942460800u | 1 | 2025-06-19 09:42:49.032385
09:48:58 policy-db-migrator | 43 | 0520-toscacapabilityassignments.sql | upgrade | 0 | 0800 | 1906250942460800u | 1 | 2025-06-19 09:42:49.086395
09:48:58 policy-db-migrator | 44 | 0530-toscacapabilityassignments_toscacapabilityassignment.sql | upgrade | 0 | 0800 | 1906250942460800u | 1 | 2025-06-19 09:42:49.143806
09:48:58 policy-db-migrator | 45 | 0540-toscacapabilitytype.sql | upgrade | 0 | 0800 | 1906250942460800u | 1 | 2025-06-19 09:42:49.203062
09:48:58 policy-db-migrator | 46 | 0550-toscacapabilitytypes.sql | upgrade | 0 | 0800 | 1906250942460800u | 1 | 2025-06-19 09:42:49.255504
09:48:58 policy-db-migrator | 47 | 0560-toscacapabilitytypes_toscacapabilitytype.sql | upgrade | 0 | 0800 | 1906250942460800u | 1 | 2025-06-19 09:42:49.322153
09:48:58 policy-db-migrator | 48 | 0570-toscadatatype.sql | upgrade | 0 | 0800 | 1906250942460800u | 1 | 2025-06-19 09:42:49.383683
09:48:58 policy-db-migrator | 49 | 0580-toscadatatypes.sql | upgrade | 0 | 0800 | 1906250942460800u | 1 | 2025-06-19 09:42:49.444007
09:48:58 policy-db-migrator | 50 | 0590-toscadatatypes_toscadatatype.sql | upgrade | 0 | 0800 | 1906250942460800u | 1 | 2025-06-19 09:42:49.50427
09:48:58 policy-db-migrator | 51 | 0600-toscanodetemplate.sql | upgrade | 0 | 0800 | 1906250942460800u | 1 | 2025-06-19 09:42:49.566997
09:48:58 policy-db-migrator | 52 | 0610-toscanodetemplates.sql | upgrade | 0 | 0800 | 1906250942460800u | 1 | 2025-06-19 09:42:49.628959
09:48:58 policy-db-migrator | 53 | 0620-toscanodetemplates_toscanodetemplate.sql | upgrade | 0 | 0800 | 1906250942460800u | 1 | 2025-06-19 09:42:49.683983
09:48:58 policy-db-migrator | 54 | 0630-toscanodetype.sql | upgrade | 0 | 0800 | 1906250942460800u | 1 | 2025-06-19 09:42:49.739765
09:48:58 policy-db-migrator | 55 | 0640-toscanodetypes.sql | upgrade | 0 | 0800 | 1906250942460800u | 1 | 2025-06-19 09:42:49.797034
09:48:58 policy-db-migrator | 56 | 0650-toscanodetypes_toscanodetype.sql | upgrade | 0 | 0800 | 1906250942460800u | 1 | 2025-06-19 09:42:49.849125
09:48:58 policy-db-migrator | 57 | 0660-toscaparameter.sql | upgrade | 0 | 0800 | 1906250942460800u | 1 | 2025-06-19 09:42:49.89676
09:48:58 policy-db-migrator | 58 | 0670-toscapolicies.sql | upgrade | 0 | 0800 | 1906250942460800u | 1 | 2025-06-19 09:42:49.944828
09:48:58 policy-db-migrator | 59 | 0680-toscapolicies_toscapolicy.sql | upgrade | 0 | 0800 | 1906250942460800u | 1 | 2025-06-19 09:42:50.00255
09:48:58 policy-db-migrator | 60 | 0690-toscapolicy.sql | upgrade | 0 | 0800 | 1906250942460800u | 1 | 2025-06-19 09:42:50.061017
09:48:58 policy-db-migrator | 61 | 0700-toscapolicytype.sql | upgrade | 0 | 0800 | 1906250942460800u | 1 | 2025-06-19 09:42:50.113034
09:48:58 policy-db-migrator | 62 | 0710-toscapolicytypes.sql | upgrade | 0 | 0800 | 1906250942460800u | 1 | 2025-06-19 09:42:50.231624
09:48:58 policy-db-migrator | 63 | 0720-toscapolicytypes_toscapolicytype.sql | upgrade | 0 | 0800 | 1906250942460800u | 1 | 2025-06-19 09:42:50.364762
09:48:58 policy-db-migrator | 64 | 0730-toscaproperty.sql | upgrade | 0 | 0800 | 1906250942460800u | 1 | 2025-06-19 09:42:50.641299
09:48:58 policy-db-migrator | 65 | 0740-toscarelationshiptype.sql | upgrade | 0 | 0800 | 1906250942460800u | 1 | 2025-06-19 09:42:50.750743
09:48:58 policy-db-migrator | 66 | 0750-toscarelationshiptypes.sql | upgrade | 0 | 0800 | 1906250942460800u | 1 | 2025-06-19 09:42:50.994877
09:48:58 policy-db-migrator | 67 | 0760-toscarelationshiptypes_toscarelationshiptype.sql | upgrade | 0 | 0800 | 1906250942460800u | 1 | 2025-06-19 09:42:51.121995
09:48:58 policy-db-migrator | 68 | 0770-toscarequirement.sql | upgrade | 0 | 0800 | 1906250942460800u | 1 | 2025-06-19 09:42:51.356447
09:48:58 policy-db-migrator | 69 | 0780-toscarequirements.sql | upgrade | 0 | 0800 | 1906250942460800u | 1 | 2025-06-19 09:42:51.622265
09:48:58 policy-db-migrator | 70 | 0790-toscarequirements_toscarequirement.sql | upgrade | 0 | 0800 | 1906250942460800u | 1 | 2025-06-19 09:42:51.714416
09:48:58 policy-db-migrator | 71 | 0800-toscaservicetemplate.sql | upgrade | 0 | 0800 | 1906250942460800u | 1 | 2025-06-19 09:42:52.153348
09:48:58 policy-db-migrator | 72 | 0810-toscatopologytemplate.sql | upgrade | 0 | 0800 | 1906250942460800u | 1 | 2025-06-19 09:42:52.240216
09:48:58 policy-db-migrator | 73 | 0820-toscatrigger.sql | upgrade | 0 | 0800 | 1906250942460800u | 1 | 2025-06-19 09:42:52.321085
09:48:58 policy-db-migrator | 74 | 0830-FK_ToscaNodeTemplate_capabilitiesName.sql | upgrade | 0 | 0800 | 1906250942460800u | 1 | 2025-06-19 09:42:52.403011
09:48:58 policy-db-migrator | 75 | 0840-FK_ToscaNodeTemplate_requirementsName.sql | upgrade | 0 | 0800 | 1906250942460800u | 1 | 2025-06-19 09:42:52.641852
09:48:58 policy-db-migrator | 76 | 0850-FK_ToscaNodeType_requirementsName.sql | upgrade | 0 | 0800 | 1906250942460800u | 1 | 2025-06-19 09:42:53.055873
09:48:58 policy-db-migrator | 77 | 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql | upgrade | 0 | 0800 | 1906250942460800u | 1 | 2025-06-19 09:42:53.584401
09:48:58 policy-db-migrator | 78 | 0870-FK_ToscaServiceTemplate_dataTypesName.sql | upgrade | 0 | 0800 | 1906250942460800u | 1 | 2025-06-19 09:42:53.714874
09:48:58 policy-db-migrator | 79 | 0880-FK_ToscaServiceTemplate_nodeTypesName.sql | upgrade | 0 | 0800 | 1906250942460800u | 1 | 2025-06-19 09:42:54.089856
09:48:58 policy-db-migrator | 80 | 0890-FK_ToscaServiceTemplate_policyTypesName.sql | upgrade | 0 | 0800 | 1906250942460800u | 1 | 2025-06-19 09:42:54.486092
09:48:58 policy-db-migrator | 81 | 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql | upgrade | 0 | 0800 | 1906250942460800u | 1 | 2025-06-19 09:42:54.629593
09:48:58 policy-db-migrator | 82 | 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql | upgrade | 0 | 0800 | 1906250942460800u | 1 | 2025-06-19 09:42:54.687748
09:48:58 policy-db-migrator | 83 | 0920-FK_ToscaTopologyTemplate_policyName.sql | upgrade | 0 | 0800 | 1906250942460800u | 1 | 2025-06-19 09:42:54.748339
09:48:58 policy-db-migrator | 84 | 0940-PdpPolicyStatus_PdpGroup.sql | upgrade | 0 | 0800 | 1906250942460800u | 1 | 2025-06-19 09:42:54.803188
09:48:58 policy-db-migrator | 85 | 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql | upgrade | 0 | 0800 | 1906250942460800u | 1 | 2025-06-19 09:42:54.858586
09:48:58 policy-db-migrator | 86 | 0960-FK_ToscaNodeTemplate_capabilitiesName.sql | upgrade | 0 | 0800 | 1906250942460800u | 1 | 2025-06-19 09:42:54.915574
09:48:58 policy-db-migrator | 87 | 0970-FK_ToscaNodeTemplate_requirementsName.sql | upgrade | 0 | 0800 | 1906250942460800u | 1 | 2025-06-19 09:42:54.97296
09:48:58 policy-db-migrator | 88 | 0980-FK_ToscaNodeType_requirementsName.sql | upgrade | 0 | 0800 | 1906250942460800u | 1 | 2025-06-19 09:42:55.034582
09:48:58 policy-db-migrator | 89 | 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql | upgrade | 0 | 0800 | 1906250942460800u | 1 | 2025-06-19 09:42:55.084571
09:48:58 policy-db-migrator | 90 | 1000-FK_ToscaServiceTemplate_dataTypesName.sql | upgrade | 0 | 0800 | 1906250942460800u | 1 | 2025-06-19 09:42:55.139851
09:48:58 policy-db-migrator | 91 | 1010-FK_ToscaServiceTemplate_nodeTypesName.sql | upgrade | 0 | 0800 | 1906250942460800u | 1 | 2025-06-19 09:42:55.200202
09:48:58 policy-db-migrator | 92 | 1020-FK_ToscaServiceTemplate_policyTypesName.sql | upgrade | 0 | 0800 | 1906250942460800u | 1 | 2025-06-19 09:42:55.263712
09:48:58 policy-db-migrator | 93 | 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql | upgrade | 0 | 0800 | 1906250942460800u | 1 | 2025-06-19 09:42:55.31557
09:48:58 policy-db-migrator | 94 | 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql | upgrade | 0 | 0800 | 1906250942460800u | 1 | 2025-06-19 09:42:55.366283
09:48:58 policy-db-migrator | 95 | 1050-FK_ToscaTopologyTemplate_policyName.sql | upgrade | 0 | 0800 | 1906250942460800u | 1 | 2025-06-19 09:42:55.422196
09:48:58 policy-db-migrator | 96 | 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql | upgrade | 0 | 0800 | 1906250942460800u | 1 | 2025-06-19 09:42:55.47325
09:48:58 policy-db-migrator | 97 | 0100-pdp.sql | upgrade | 0800 | 0900 | 1906250942460900u | 1 | 2025-06-19 09:42:55.52822
09:48:58 policy-db-migrator | 98 | 0110-idx_tsidx1.sql | upgrade | 0800 | 0900 | 1906250942460900u | 1 | 2025-06-19 09:42:55.590259
09:48:58 policy-db-migrator | 99 | 0120-pk_pdpstatistics.sql | upgrade | 0800 | 0900 | 1906250942460900u | 1 | 2025-06-19 09:42:55.643064
09:48:58 policy-db-migrator | 100 | 0130-pdpstatistics.sql | upgrade | 0800 | 0900 | 1906250942460900u | 1 | 2025-06-19 09:42:55.690686
09:48:58 policy-db-migrator | 101 | 0140-pk_pdpstatistics.sql | upgrade | 0800 | 0900 | 1906250942460900u | 1 | 2025-06-19 09:42:55.746115
09:48:58 policy-db-migrator | 102 | 0150-pdpstatistics.sql | upgrade | 0800 | 0900 | 1906250942460900u | 1 | 2025-06-19 09:42:55.806659
09:48:58 policy-db-migrator | 103 | 0160-jpapdpstatistics_enginestats.sql | upgrade | 0800 | 0900 | 1906250942460900u | 1 | 2025-06-19 09:42:55.861874
09:48:58 policy-db-migrator | 104 | 0170-jpapdpstatistics_enginestats.sql | upgrade | 0800 | 0900 | 1906250942460900u | 1 | 2025-06-19 09:42:55.92089
09:48:58 policy-db-migrator | 105 | 0180-jpapdpstatistics_enginestats.sql | upgrade | 0800 | 0900 | 1906250942460900u | 1 | 2025-06-19 09:42:55.975139
09:48:58 policy-db-migrator | 106 | 0190-jpapolicyaudit.sql | upgrade | 0800 | 0900 | 1906250942460900u | 1 | 2025-06-19 09:42:56.031727
09:48:58 policy-db-migrator | 107 | 0200-JpaPolicyAuditIndex_timestamp.sql | upgrade | 0800 | 0900 | 1906250942460900u | 1 | 2025-06-19 09:42:56.093594
09:48:58 policy-db-migrator | 108 | 0210-sequence.sql | upgrade | 0800 | 0900 | 1906250942460900u | 1 | 2025-06-19 09:42:56.159124
09:48:58 policy-db-migrator | 109 | 0220-sequence.sql | upgrade | 0800 | 0900 | 1906250942460900u | 1 | 2025-06-19 09:42:56.203354
09:48:58 policy-db-migrator | 110 | 0100-jpatoscapolicy_targets.sql | upgrade | 0900 | 1000 | 1906250942461000u | 1 | 2025-06-19 09:42:56.24755
09:48:58 policy-db-migrator | 111 | 0110-jpatoscapolicytype_targets.sql | upgrade | 0900 | 1000 | 1906250942461000u | 1 | 2025-06-19 09:42:56.30681
09:48:58 policy-db-migrator | 112 | 0120-toscatrigger.sql | upgrade | 0900 | 1000 | 1906250942461000u | 1 | 2025-06-19 09:42:56.360364
09:48:58 policy-db-migrator | 113 | 0130-jpatoscapolicytype_triggers.sql | upgrade | 0900 | 1000 | 1906250942461000u | 1 | 2025-06-19 09:42:56.424831
09:48:58 policy-db-migrator | 114 | 0140-toscaparameter.sql | upgrade | 0900 | 1000 | 1906250942461000u | 1 | 2025-06-19 09:42:56.479918
09:48:58 policy-db-migrator | 115 | 0150-toscaproperty.sql | upgrade | 0900 | 1000 | 1906250942461000u | 1 | 2025-06-19 09:42:56.539351
09:48:58 policy-db-migrator | 116 | 0160-jpapolicyaudit_pk.sql | upgrade | 0900 | 1000 | 1906250942461000u | 1 | 2025-06-19 09:42:56.602778
09:48:58 policy-db-migrator | 117 | 0170-pdpstatistics_pk.sql | upgrade | 0900 | 1000 | 1906250942461000u | 1 | 2025-06-19 09:42:56.680485
09:48:58 policy-db-migrator | 118 | 0180-jpatoscanodetemplate_metadata.sql | upgrade | 0900 | 1000 | 1906250942461000u | 1 | 2025-06-19 09:42:56.735756
09:48:58 policy-db-migrator | 119 | 0100-upgrade.sql | upgrade | 1000 | 1100 | 1906250942461100u | 1 | 2025-06-19 09:42:56.786166
09:48:58 policy-db-migrator | 120 | 0100-jpapolicyaudit_renameuser.sql | upgrade | 1100 | 1200 | 1906250942461200u | 1 | 2025-06-19 09:42:56.842444
09:48:58 policy-db-migrator | 121 | 0110-idx_tsidx1.sql | upgrade | 1100 | 1200 | 1906250942461200u | 1 | 2025-06-19 09:42:56.903609
09:48:58 policy-db-migrator | 122 | 0120-audit_sequence.sql | upgrade | 1100 | 1200 | 1906250942461200u | 1 | 2025-06-19 09:42:56.968221
09:48:58 policy-db-migrator | 123 | 0130-statistics_sequence.sql | upgrade | 1100 | 1200 | 1906250942461200u | 1 | 2025-06-19 09:42:57.030638
09:48:58 policy-db-migrator | 124 | 0100-pdpstatistics.sql | upgrade | 1200 | 1300 | 1906250942461300u | 1 | 2025-06-19 09:42:57.083053
09:48:58 policy-db-migrator | 125 | 0110-jpapdpstatistics_enginestats.sql | upgrade | 1200 | 1300 | 1906250942461300u | 1 | 2025-06-19 09:42:57.135891
09:48:58 policy-db-migrator | 126 | 0120-statistics_sequence.sql | upgrade | 1200 | 1300 | 1906250942461300u | 1 | 2025-06-19 09:42:57.193255
09:48:58 policy-db-migrator | (126 rows)
09:48:58 policy-db-migrator |
09:48:58 policy-db-migrator | policyadmin: OK @ 1300
09:48:58 policy-db-migrator | Initializing clampacm...
09:48:58 policy-db-migrator | 97 blocks
09:48:58 policy-db-migrator | Preparing upgrade release version: 1400
09:48:58 policy-db-migrator | Preparing upgrade release version: 1500
09:48:58 policy-db-migrator | Preparing upgrade release version: 1600
09:48:58 policy-db-migrator | Preparing upgrade release version: 1601
09:48:58 policy-db-migrator | Preparing upgrade release version: 1700
09:48:58 policy-db-migrator | Preparing upgrade release version: 1701
09:48:58 policy-db-migrator | Done
09:48:58 policy-db-migrator |
09:48:58 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping
09:48:58 policy-db-migrator | CREATE TABLE
09:48:58 policy-db-migrator | CREATE TABLE
09:48:58 policy-db-migrator | INSERT 0 1
09:48:58 policy-db-migrator | name | version
09:48:58 policy-db-migrator | ----------+---------
09:48:58 policy-db-migrator | clampacm | 0
09:48:58 policy-db-migrator | (1 row)
09:48:58 policy-db-migrator |
09:48:58 policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime
09:48:58 policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+--------
09:48:58 policy-db-migrator | (0 rows)
09:48:58 policy-db-migrator |
09:48:58 policy-db-migrator | clampacm: upgrade available: 0 -> 1701
09:48:58 policy-db-migrator |
09:48:58 policy-db-migrator | CREATE TABLE
09:48:58 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping
09:48:58 policy-db-migrator | NOTICE: relation "clampacm_schema_changelog" already exists, skipping
09:48:58 policy-db-migrator | CREATE TABLE
09:48:58 policy-db-migrator | upgrade: 0 -> 1701
09:48:58 policy-db-migrator | rc=0
09:48:58 policy-db-migrator |
09:48:58 policy-db-migrator | > upgrade 0100-automationcomposition.sql
09:48:58 policy-db-migrator | CREATE TABLE
09:48:58 policy-db-migrator | INSERT 0 1
09:48:58 policy-db-migrator | rc=0
09:48:58 policy-db-migrator |
09:48:58 policy-db-migrator | > upgrade 0200-automationcompositiondefinition.sql
09:48:58 policy-db-migrator | CREATE TABLE
09:48:58 policy-db-migrator | INSERT 0 1
09:48:58 policy-db-migrator | rc=0
09:48:58 policy-db-migrator |
09:48:58 policy-db-migrator | > upgrade 0300-automationcompositionelement.sql
09:48:58 policy-db-migrator | CREATE TABLE
09:48:58 policy-db-migrator | INSERT 0 1
09:48:58 policy-db-migrator | rc=0
09:48:58 policy-db-migrator |
09:48:58 policy-db-migrator | > upgrade 0400-nodetemplatestate.sql
09:48:58 policy-db-migrator | CREATE TABLE
09:48:58 policy-db-migrator | INSERT 0 1
09:48:58 policy-db-migrator | rc=0
09:48:58 policy-db-migrator |
09:48:58 policy-db-migrator | > upgrade 0500-participant.sql
09:48:58 policy-db-migrator | CREATE TABLE
09:48:58 policy-db-migrator | INSERT 0 1
09:48:58 policy-db-migrator | rc=0
09:48:58 policy-db-migrator |
09:48:58 policy-db-migrator | > upgrade 0600-participantsupportedelements.sql
09:48:58 policy-db-migrator | CREATE TABLE
09:48:58 policy-db-migrator | INSERT 0 1
09:48:58 policy-db-migrator | rc=0
09:48:58 policy-db-migrator |
09:48:58 policy-db-migrator | > upgrade 0700-ac_compositionId_index.sql
09:48:58 policy-db-migrator | CREATE INDEX
09:48:58 policy-db-migrator | INSERT 0 1
09:48:58 policy-db-migrator | rc=0
09:48:58 policy-db-migrator |
09:48:58 policy-db-migrator | > upgrade 0800-ac_element_fk_index.sql
09:48:58 policy-db-migrator | CREATE INDEX
09:48:58 policy-db-migrator | INSERT 0 1
09:48:58 policy-db-migrator | rc=0
09:48:58 policy-db-migrator |
09:48:58 policy-db-migrator | > upgrade 0900-dt_element_fk_index.sql
09:48:58 policy-db-migrator | CREATE INDEX
09:48:58 policy-db-migrator | INSERT 0 1
09:48:58 policy-db-migrator | rc=0
09:48:58 policy-db-migrator |
09:48:58 policy-db-migrator | > upgrade 1000-supported_element_fk_index.sql
09:48:58 policy-db-migrator | CREATE INDEX
09:48:58 policy-db-migrator | INSERT 0 1
09:48:58 policy-db-migrator | rc=0
09:48:58 policy-db-migrator |
09:48:58 policy-db-migrator | > upgrade 1100-automationcompositionelement_fk.sql
09:48:58 policy-db-migrator | ALTER TABLE
09:48:58 policy-db-migrator | INSERT 0 1
09:48:58 policy-db-migrator | rc=0
09:48:58 policy-db-migrator |
09:48:58 policy-db-migrator | > upgrade 1200-nodetemplate_fk.sql
09:48:58 policy-db-migrator | ALTER TABLE
09:48:58 policy-db-migrator | INSERT 0 1
09:48:58 policy-db-migrator | rc=0
09:48:58 policy-db-migrator |
09:48:58 policy-db-migrator | > upgrade 1300-participantsupportedelements_fk.sql
09:48:58 policy-db-migrator | ALTER TABLE
09:48:58 policy-db-migrator | INSERT 0 1
09:48:58 policy-db-migrator | rc=0
09:48:58 policy-db-migrator |
09:48:58 policy-db-migrator | > upgrade 0100-automationcomposition.sql
09:48:58 policy-db-migrator | ALTER TABLE
09:48:58 policy-db-migrator | UPDATE 0
09:48:58 policy-db-migrator | INSERT 0 1
09:48:58 policy-db-migrator | rc=0
09:48:58 policy-db-migrator |
09:48:58 policy-db-migrator | > upgrade 0200-automationcompositiondefinition.sql
09:48:58 policy-db-migrator | ALTER TABLE
09:48:58 policy-db-migrator | UPDATE 0
09:48:58 policy-db-migrator | INSERT 0 1
09:48:58 policy-db-migrator | rc=0
09:48:58 policy-db-migrator |
09:48:58 policy-db-migrator | > upgrade 0300-participantreplica.sql
09:48:58 policy-db-migrator | CREATE TABLE
09:48:58 policy-db-migrator | INSERT 0 1
09:48:58 policy-db-migrator | rc=0
09:48:58 policy-db-migrator |
09:48:58 policy-db-migrator | > upgrade 0400-participant.sql
09:48:58 policy-db-migrator | ALTER TABLE
09:48:58 policy-db-migrator | INSERT 0 1
09:48:58 policy-db-migrator | rc=0
09:48:58 policy-db-migrator |
09:48:58 policy-db-migrator | > upgrade 0500-participant_replica_fk_index.sql
09:48:58 policy-db-migrator | CREATE INDEX
09:48:58 policy-db-migrator | INSERT 0 1
09:48:58 policy-db-migrator | rc=0
09:48:58 policy-db-migrator |
09:48:58 policy-db-migrator | > upgrade 0600-participant_replica_fk.sql
09:48:58 policy-db-migrator | ALTER TABLE
09:48:58 policy-db-migrator | INSERT 0 1
09:48:58 policy-db-migrator | rc=0
09:48:58 policy-db-migrator |
09:48:58 policy-db-migrator | > upgrade 0700-automationcompositionelement.sql
09:48:58 policy-db-migrator | UPDATE 0
09:48:58 policy-db-migrator | INSERT 0 1
09:48:58 policy-db-migrator | rc=0
09:48:58 policy-db-migrator |
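The NOTICE: relation "..." already exists, skipping lines that bracket each verification step above are Postgres reacting to idempotent DDL: CREATE TABLE IF NOT EXISTS creates the table the first time and emits exactly that notice on every rerun. A sketch of the pattern; the column definitions are assumptions, only the IF NOT EXISTS behaviour is the point:

    # Re-runnable bookkeeping DDL: a second invocation prints
    # 'NOTICE: relation "schema_versions" already exists, skipping'
    # instead of failing, matching the log output.
    psql -U policy_user -d migration -c \
        'CREATE TABLE IF NOT EXISTS schema_versions (
             name    varchar(60),   -- column names/types here are assumptions
             version varchar(20)
         );'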
09:48:58 policy-db-migrator | > upgrade 0800-nodetemplatestate.sql
09:48:58 policy-db-migrator | UPDATE 0
09:48:58 policy-db-migrator | INSERT 0 1
09:48:58 policy-db-migrator | rc=0
09:48:58 policy-db-migrator |
09:48:58 policy-db-migrator | > upgrade 0100-automationcomposition.sql
09:48:58 policy-db-migrator | ALTER TABLE
09:48:58 policy-db-migrator | INSERT 0 1
09:48:58 policy-db-migrator | rc=0
09:48:58 policy-db-migrator |
09:48:58 policy-db-migrator | > upgrade 0200-automationcompositionelement.sql
09:48:58 policy-db-migrator | ALTER TABLE
09:48:58 policy-db-migrator | INSERT 0 1
09:48:58 policy-db-migrator | rc=0
09:48:58 policy-db-migrator |
09:48:58 policy-db-migrator | > upgrade 0100-automationcomposition.sql
09:48:58 policy-db-migrator | UPDATE 0
09:48:58 policy-db-migrator | INSERT 0 1
09:48:58 policy-db-migrator | rc=0
09:48:58 policy-db-migrator |
09:48:58 policy-db-migrator | > upgrade 0200-automationcompositionelement.sql
09:48:58 policy-db-migrator | UPDATE 0
09:48:58 policy-db-migrator | UPDATE 0
09:48:58 policy-db-migrator | INSERT 0 1
09:48:58 policy-db-migrator | rc=0
09:48:58 policy-db-migrator |
09:48:58 policy-db-migrator | > upgrade 0100-message.sql
09:48:58 policy-db-migrator | CREATE TABLE
09:48:58 policy-db-migrator | INSERT 0 1
09:48:58 policy-db-migrator | rc=0
09:48:58 policy-db-migrator |
09:48:58 policy-db-migrator | > upgrade 0200-messagejob.sql
09:48:58 policy-db-migrator | CREATE TABLE
09:48:58 policy-db-migrator | INSERT 0 1
09:48:58 policy-db-migrator | rc=0
09:48:58 policy-db-migrator |
09:48:58 policy-db-migrator | > upgrade 0300-messagejob_identificationId_index.sql
09:48:58 policy-db-migrator | CREATE INDEX
09:48:58 policy-db-migrator | INSERT 0 1
09:48:58 policy-db-migrator | rc=0
09:48:58 policy-db-migrator |
09:48:58 policy-db-migrator | > upgrade 0100-automationcompositionrollback.sql
09:48:58 policy-db-migrator | CREATE TABLE
09:48:58 policy-db-migrator | INSERT 0 1
09:48:58 policy-db-migrator | rc=0
09:48:58 policy-db-migrator |
09:48:58 policy-db-migrator | > upgrade 0200-automationcomposition.sql
09:48:58 policy-db-migrator | UPDATE 0
09:48:58 policy-db-migrator | UPDATE 0
09:48:58 policy-db-migrator | UPDATE 0
09:48:58 policy-db-migrator | UPDATE 0
09:48:58 policy-db-migrator | UPDATE 0
09:48:58 policy-db-migrator | UPDATE 0
09:48:58 policy-db-migrator | ALTER TABLE
09:48:58 policy-db-migrator | INSERT 0 1
09:48:58 policy-db-migrator | rc=0
09:48:58 policy-db-migrator |
09:48:58 policy-db-migrator | > upgrade 0300-automationcompositionelement.sql
09:48:58 policy-db-migrator | UPDATE 0
09:48:58 policy-db-migrator | UPDATE 0
09:48:58 policy-db-migrator | UPDATE 0
09:48:58 policy-db-migrator | UPDATE 0
09:48:58 policy-db-migrator | UPDATE 0
09:48:58 policy-db-migrator | ALTER TABLE
09:48:58 policy-db-migrator | INSERT 0 1
09:48:58 policy-db-migrator | rc=0
09:48:58 policy-db-migrator |
09:48:58 policy-db-migrator | > upgrade 0400-automationcomposition_fk.sql
09:48:58 policy-db-migrator | ALTER TABLE
09:48:58 policy-db-migrator | INSERT 0 1
09:48:58 policy-db-migrator | rc=0
09:48:58 policy-db-migrator |
09:48:58 policy-db-migrator | > upgrade 0500-automationcompositiondefinition.sql
09:48:58 policy-db-migrator | UPDATE 0
09:48:58 policy-db-migrator | UPDATE 0
09:48:58 policy-db-migrator | UPDATE 0
09:48:58 policy-db-migrator | UPDATE 0
09:48:58 policy-db-migrator | UPDATE 0
09:48:58 policy-db-migrator | ALTER TABLE
09:48:58 policy-db-migrator | INSERT 0 1
09:48:58 policy-db-migrator | rc=0
09:48:58 policy-db-migrator |
09:48:58 policy-db-migrator | > upgrade 0600-nodetemplatestate.sql
09:48:58 policy-db-migrator | UPDATE 0
09:48:58 policy-db-migrator | UPDATE 0
09:48:58 policy-db-migrator | UPDATE 0
09:48:58 policy-db-migrator | UPDATE 0
09:48:58 policy-db-migrator | ALTER TABLE
09:48:58 policy-db-migrator | INSERT 0 1
09:48:58 policy-db-migrator | rc=0
09:48:58 policy-db-migrator |
09:48:58 policy-db-migrator | > upgrade 0700-mb_identificationId_index.sql
09:48:58 policy-db-migrator | CREATE INDEX
09:48:58 policy-db-migrator | INSERT 0 1
09:48:58 policy-db-migrator | rc=0
09:48:58 policy-db-migrator |
09:48:58 policy-db-migrator | > upgrade 0800-participantreplica.sql
09:48:58 policy-db-migrator | UPDATE 0
09:48:58 policy-db-migrator | UPDATE 0
09:48:58 policy-db-migrator | ALTER TABLE
09:48:58 policy-db-migrator | INSERT 0 1
09:48:58 policy-db-migrator | rc=0
09:48:58 policy-db-migrator |
09:48:58 policy-db-migrator | > upgrade 0900-participantsupportedacelements.sql
09:48:58 policy-db-migrator | UPDATE 0
09:48:58 policy-db-migrator | UPDATE 0
09:48:58 policy-db-migrator | ALTER TABLE
09:48:58 policy-db-migrator | INSERT 0 1
09:48:58 policy-db-migrator | INSERT 0 1
09:48:58 policy-db-migrator | clampacm: OK: upgrade (1701)
09:48:58 policy-db-migrator |
09:48:58 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping
09:48:58 policy-db-migrator | CREATE TABLE
09:48:58 policy-db-migrator | CREATE TABLE
NOTICE: relation "clampacm_schema_changelog" already exists, skipping 09:48:58 policy-db-migrator | name | version 09:48:58 policy-db-migrator | ----------+--------- 09:48:58 policy-db-migrator | clampacm | 1701 09:48:58 policy-db-migrator | (1 row) 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime 09:48:58 policy-db-migrator | ----+--------------------------------------------+-----------+--------------+------------+-------------------+---------+---------------------------- 09:48:58 policy-db-migrator | 1 | 0100-automationcomposition.sql | upgrade | 1300 | 1400 | 1906250942571400u | 1 | 2025-06-19 09:42:57.886833 09:48:58 policy-db-migrator | 2 | 0200-automationcompositiondefinition.sql | upgrade | 1300 | 1400 | 1906250942571400u | 1 | 2025-06-19 09:42:57.948528 09:48:58 policy-db-migrator | 3 | 0300-automationcompositionelement.sql | upgrade | 1300 | 1400 | 1906250942571400u | 1 | 2025-06-19 09:42:58.004537 09:48:58 policy-db-migrator | 4 | 0400-nodetemplatestate.sql | upgrade | 1300 | 1400 | 1906250942571400u | 1 | 2025-06-19 09:42:58.063289 09:48:58 policy-db-migrator | 5 | 0500-participant.sql | upgrade | 1300 | 1400 | 1906250942571400u | 1 | 2025-06-19 09:42:58.121468 09:48:58 policy-db-migrator | 6 | 0600-participantsupportedelements.sql | upgrade | 1300 | 1400 | 1906250942571400u | 1 | 2025-06-19 09:42:58.192306 09:48:58 policy-db-migrator | 7 | 0700-ac_compositionId_index.sql | upgrade | 1300 | 1400 | 1906250942571400u | 1 | 2025-06-19 09:42:58.25113 09:48:58 policy-db-migrator | 8 | 0800-ac_element_fk_index.sql | upgrade | 1300 | 1400 | 1906250942571400u | 1 | 2025-06-19 09:42:58.306588 09:48:58 policy-db-migrator | 9 | 0900-dt_element_fk_index.sql | upgrade | 1300 | 1400 | 1906250942571400u | 1 | 2025-06-19 09:42:58.364247 09:48:58 policy-db-migrator | 10 | 1000-supported_element_fk_index.sql | upgrade | 1300 | 1400 | 1906250942571400u | 1 | 2025-06-19 09:42:58.419873 09:48:58 policy-db-migrator | 11 | 1100-automationcompositionelement_fk.sql | upgrade | 1300 | 1400 | 1906250942571400u | 1 | 2025-06-19 09:42:58.476148 09:48:58 policy-db-migrator | 12 | 1200-nodetemplate_fk.sql | upgrade | 1300 | 1400 | 1906250942571400u | 1 | 2025-06-19 09:42:58.531406 09:48:58 policy-db-migrator | 13 | 1300-participantsupportedelements_fk.sql | upgrade | 1300 | 1400 | 1906250942571400u | 1 | 2025-06-19 09:42:58.583784 09:48:58 policy-db-migrator | 14 | 0100-automationcomposition.sql | upgrade | 1400 | 1500 | 1906250942571500u | 1 | 2025-06-19 09:42:58.638093 09:48:58 policy-db-migrator | 15 | 0200-automationcompositiondefinition.sql | upgrade | 1400 | 1500 | 1906250942571500u | 1 | 2025-06-19 09:42:58.689611 09:48:58 policy-db-migrator | 16 | 0300-participantreplica.sql | upgrade | 1400 | 1500 | 1906250942571500u | 1 | 2025-06-19 09:42:58.751371 09:48:58 policy-db-migrator | 17 | 0400-participant.sql | upgrade | 1400 | 1500 | 1906250942571500u | 1 | 2025-06-19 09:42:58.8047 09:48:58 policy-db-migrator | 18 | 0500-participant_replica_fk_index.sql | upgrade | 1400 | 1500 | 1906250942571500u | 1 | 2025-06-19 09:42:58.86107 09:48:58 policy-db-migrator | 19 | 0600-participant_replica_fk.sql | upgrade | 1400 | 1500 | 1906250942571500u | 1 | 2025-06-19 09:42:58.916751 09:48:58 policy-db-migrator | 20 | 0700-automationcompositionelement.sql | upgrade | 1400 | 1500 | 1906250942571500u | 1 | 2025-06-19 09:42:58.967827 09:48:58 policy-db-migrator | 21 | 0800-nodetemplatestate.sql | upgrade | 1400 | 1500 | 
1906250942571500u | 1 | 2025-06-19 09:42:59.019741 09:48:58 policy-db-migrator | 22 | 0100-automationcomposition.sql | upgrade | 1500 | 1600 | 1906250942571600u | 1 | 2025-06-19 09:42:59.072147 09:48:58 policy-db-migrator | 23 | 0200-automationcompositionelement.sql | upgrade | 1500 | 1600 | 1906250942571600u | 1 | 2025-06-19 09:42:59.124502 09:48:58 policy-db-migrator | 24 | 0100-automationcomposition.sql | upgrade | 1501 | 1601 | 1906250942571601u | 1 | 2025-06-19 09:42:59.17586 09:48:58 policy-db-migrator | 25 | 0200-automationcompositionelement.sql | upgrade | 1501 | 1601 | 1906250942571601u | 1 | 2025-06-19 09:42:59.22744 09:48:58 policy-db-migrator | 26 | 0100-message.sql | upgrade | 1600 | 1700 | 1906250942571700u | 1 | 2025-06-19 09:42:59.282542 09:48:58 policy-db-migrator | 27 | 0200-messagejob.sql | upgrade | 1600 | 1700 | 1906250942571700u | 1 | 2025-06-19 09:42:59.343389 09:48:58 policy-db-migrator | 28 | 0300-messagejob_identificationId_index.sql | upgrade | 1600 | 1700 | 1906250942571700u | 1 | 2025-06-19 09:42:59.396793 09:48:58 policy-db-migrator | 29 | 0100-automationcompositionrollback.sql | upgrade | 1601 | 1701 | 1906250942571701u | 1 | 2025-06-19 09:42:59.45891 09:48:58 policy-db-migrator | 30 | 0200-automationcomposition.sql | upgrade | 1601 | 1701 | 1906250942571701u | 1 | 2025-06-19 09:42:59.516803 09:48:58 policy-db-migrator | 31 | 0300-automationcompositionelement.sql | upgrade | 1601 | 1701 | 1906250942571701u | 1 | 2025-06-19 09:42:59.570365 09:48:58 policy-db-migrator | 32 | 0400-automationcomposition_fk.sql | upgrade | 1601 | 1701 | 1906250942571701u | 1 | 2025-06-19 09:42:59.627668 09:48:58 policy-db-migrator | 33 | 0500-automationcompositiondefinition.sql | upgrade | 1601 | 1701 | 1906250942571701u | 1 | 2025-06-19 09:42:59.682916 09:48:58 policy-db-migrator | 34 | 0600-nodetemplatestate.sql | upgrade | 1601 | 1701 | 1906250942571701u | 1 | 2025-06-19 09:42:59.740909 09:48:58 policy-db-migrator | 35 | 0700-mb_identificationId_index.sql | upgrade | 1601 | 1701 | 1906250942571701u | 1 | 2025-06-19 09:42:59.789015 09:48:58 policy-db-migrator | 36 | 0800-participantreplica.sql | upgrade | 1601 | 1701 | 1906250942571701u | 1 | 2025-06-19 09:42:59.842677 09:48:58 policy-db-migrator | 37 | 0900-participantsupportedacelements.sql | upgrade | 1601 | 1701 | 1906250942571701u | 1 | 2025-06-19 09:42:59.897601 09:48:58 policy-db-migrator | (37 rows) 09:48:58 policy-db-migrator | 09:48:58 policy-db-migrator | clampacm: OK @ 1701 09:48:58 policy-db-migrator | Initializing pooling... 
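Every row in the changelog above is one executed script, and the version shown by the name | version query is advanced as scripts succeed. A speculative sketch of that bookkeeping, using only the table and column names visible in the output (the id column is assumed to be filled by a sequence); the real migrator may implement this differently:

    # Hypothetical helper: record one script run in <db>_schema_changelog
    # and advance schema_versions when it succeeded (rc=0).
    record_result() {
        local db="$1" script="$2" from="$3" to="$4" tag="$5" rc="$6"
        local ok=0
        [ "${rc}" -eq 0 ] && ok=1
        psql -U policy_user -d migration -c "
            INSERT INTO ${db}_schema_changelog
                (script, operation, from_version, to_version, tag, success, attime)
            VALUES ('${script}', 'upgrade', '${from}', '${to}', '${tag}', ${ok}, now());"
        [ "${ok}" -eq 1 ] && psql -U policy_user -d migration -c \
            "UPDATE schema_versions SET version = '${to}' WHERE name = '${db}';"
    }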
09:48:58 policy-db-migrator | Initializing pooling...
09:48:58 policy-db-migrator | 4 blocks
09:48:58 policy-db-migrator | Preparing upgrade release version: 1600
09:48:58 policy-db-migrator | Done
09:48:58 policy-db-migrator |
09:48:58 policy-db-migrator | CREATE TABLE
09:48:58 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping
09:48:58 policy-db-migrator | CREATE TABLE
09:48:58 policy-db-migrator | INSERT 0 1
09:48:58 policy-db-migrator | name | version
09:48:58 policy-db-migrator | ---------+---------
09:48:58 policy-db-migrator | pooling | 0
09:48:58 policy-db-migrator | (1 row)
09:48:58 policy-db-migrator |
09:48:58 policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime
09:48:58 policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+--------
09:48:58 policy-db-migrator | (0 rows)
09:48:58 policy-db-migrator |
09:48:58 policy-db-migrator | pooling: upgrade available: 0 -> 1600
09:48:58 policy-db-migrator |
09:48:58 policy-db-migrator | CREATE TABLE
09:48:58 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping
09:48:58 policy-db-migrator | CREATE TABLE
09:48:58 policy-db-migrator | NOTICE: relation "pooling_schema_changelog" already exists, skipping
09:48:58 policy-db-migrator | upgrade: 0 -> 1600
09:48:58 policy-db-migrator | rc=0
09:48:58 policy-db-migrator |
09:48:58 policy-db-migrator | > upgrade 0100-distributed.locking.sql
09:48:58 policy-db-migrator | CREATE TABLE
09:48:58 policy-db-migrator | CREATE INDEX
09:48:58 policy-db-migrator | CREATE INDEX
09:48:58 policy-db-migrator | INSERT 0 1
09:48:58 policy-db-migrator | INSERT 0 1
09:48:58 policy-db-migrator | pooling: OK: upgrade (1600)
09:48:58 policy-db-migrator |
09:48:58 policy-db-migrator | CREATE TABLE
09:48:58 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping
09:48:58 policy-db-migrator | CREATE TABLE
09:48:58 policy-db-migrator | NOTICE: relation "pooling_schema_changelog" already exists, skipping
09:48:58 policy-db-migrator | name | version
09:48:58 policy-db-migrator | ---------+---------
09:48:58 policy-db-migrator | pooling | 1600
09:48:58 policy-db-migrator | (1 row)
09:48:58 policy-db-migrator |
09:48:58 policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime
09:48:58 policy-db-migrator | ----+------------------------------+-----------+--------------+------------+-------------------+---------+----------------------------
09:48:58 policy-db-migrator | 1 | 0100-distributed.locking.sql | upgrade | 1500 | 1600 | 1906250943001600u | 1 | 2025-06-19 09:43:00.611444
09:48:58 policy-db-migrator | (1 row)
09:48:58 policy-db-migrator |
09:48:58 policy-db-migrator | pooling: OK @ 1600
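The pooling: upgrade available: 0 -> 1600 line above comes from comparing the version recorded in schema_versions with the highest release the migrator just prepared. A sketch of that comparison (the 10# prefix forces base-10 arithmetic so zero-padded versions such as 0800 compare correctly; the names and the query target are illustrative assumptions):

    # Hypothetical check: does the pooling schema need an upgrade?
    current=$(psql -U policy_user -d migration -Atc \
        "SELECT version FROM schema_versions WHERE name = 'pooling';")
    target=1600   # highest "Preparing upgrade release version" printed above
    if (( 10#${current:-0} < 10#${target} )); then
        echo "pooling: upgrade available: ${current:-0} -> ${target}"
    fi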
09:48:58 policy-db-migrator | Initializing operationshistory...
09:48:58 policy-db-migrator | 6 blocks
09:48:58 policy-db-migrator | Preparing upgrade release version: 1600
09:48:58 policy-db-migrator | Done
09:48:58 policy-db-migrator |
09:48:58 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping
09:48:58 policy-db-migrator | CREATE TABLE
09:48:58 policy-db-migrator | CREATE TABLE
09:48:58 policy-db-migrator | INSERT 0 1
09:48:58 policy-db-migrator | name | version
09:48:58 policy-db-migrator | -------------------+---------
09:48:58 policy-db-migrator | operationshistory | 0
09:48:58 policy-db-migrator | (1 row)
09:48:58 policy-db-migrator |
09:48:58 policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime
09:48:58 policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+--------
09:48:58 policy-db-migrator | (0 rows)
09:48:58 policy-db-migrator |
09:48:58 policy-db-migrator | operationshistory: upgrade available: 0 -> 1600
09:48:58 policy-db-migrator |
09:48:58 policy-db-migrator | CREATE TABLE
09:48:58 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping
09:48:58 policy-db-migrator | CREATE TABLE
09:48:58 policy-db-migrator | NOTICE: relation "operationshistory_schema_changelog" already exists, skipping
09:48:58 policy-db-migrator | upgrade: 0 -> 1600
09:48:58 policy-db-migrator | rc=0
09:48:58 policy-db-migrator |
09:48:58 policy-db-migrator | > upgrade 0100-ophistory_id_sequence.sql
09:48:58 policy-db-migrator | CREATE TABLE
09:48:58 policy-db-migrator | INSERT 0 1
09:48:58 policy-db-migrator | INSERT 0 1
09:48:58 policy-db-migrator | rc=0
09:48:58 policy-db-migrator |
09:48:58 policy-db-migrator | > upgrade 0110-operationshistory.sql
09:48:58 policy-db-migrator | CREATE TABLE
09:48:58 policy-db-migrator | CREATE INDEX
09:48:58 policy-db-migrator | CREATE INDEX
09:48:58 policy-db-migrator | INSERT 0 1
09:48:58 policy-db-migrator | INSERT 0 1
09:48:58 policy-db-migrator | operationshistory: OK: upgrade (1600)
09:48:58 policy-db-migrator |
09:48:58 policy-db-migrator | CREATE TABLE
09:48:58 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping
09:48:58 policy-db-migrator | CREATE TABLE
09:48:58 policy-db-migrator | NOTICE: relation "operationshistory_schema_changelog" already exists, skipping
09:48:58 policy-db-migrator | name | version
09:48:58 policy-db-migrator | -------------------+---------
09:48:58 policy-db-migrator | operationshistory | 1600
09:48:58 policy-db-migrator | (1 row)
09:48:58 policy-db-migrator |
09:48:58 policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime
09:48:58 policy-db-migrator | ----+--------------------------------+-----------+--------------+------------+-------------------+---------+----------------------------
0100-ophistory_id_sequence.sql | upgrade | 1500 | 1600 | 1906250943011600u | 1 | 2025-06-19 09:43:01.316096
09:48:58 policy-db-migrator | 2 | 0110-operationshistory.sql | upgrade | 1500 | 1600 | 1906250943011600u | 1 | 2025-06-19 09:43:01.387204
09:48:58 policy-db-migrator | (2 rows)
09:48:58 policy-db-migrator |
09:48:58 policy-db-migrator | operationshistory: OK @ 1600
09:48:58 policy-opa-pdp | Waiting for kafka port 9092...
09:48:58 policy-opa-pdp | nc: connect to kafka (172.17.0.8) port 9092 (tcp) failed: Connection refused
09:48:58 policy-opa-pdp | [identical nc retry lines elided]
09:48:58 policy-opa-pdp | Connection to kafka (172.17.0.8) 9092 port [tcp/*] succeeded!
09:48:58 policy-opa-pdp | Waiting for pap port 6969...
09:48:58 policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused
09:48:58 policy-opa-pdp | [identical nc retry lines elided]
09:48:58 policy-opa-pdp | Connection to pap (172.17.0.9) 6969 port [tcp/*] succeeded!
09:48:58 policy-opa-pdp | time="2025-06-19T09:44:08Z" level=debug msg="###################################### "
09:48:58 policy-opa-pdp | time="2025-06-19T09:44:08Z" level=debug msg="OPA-PDP: Starting initialisation "
09:48:58 policy-opa-pdp | time="2025-06-19T09:44:08Z" level=debug msg="###################################### "
09:48:58 policy-opa-pdp | time="2025-06-19T09:44:08Z" level=warning msg="KAFKA_URL not defined, using default value"
09:48:58 policy-opa-pdp | time="2025-06-19T09:44:08Z" level=warning msg="PAP_TOPIC not defined, using default value"
09:48:58 policy-opa-pdp | time="2025-06-19T09:44:08Z" level=warning msg="PATCH_TOPIC not defined, using default value"
09:48:58 policy-opa-pdp | time="2025-06-19T09:44:08Z" level=warning msg="PATCH_GROUPID not defined, using default value"
09:48:58 policy-opa-pdp | time="2025-06-19T09:44:08Z" level=warning msg="API_USER not defined, using default value"
09:48:58 policy-opa-pdp | time="2025-06-19T09:44:08Z" level=warning msg="API_PASSWORD not defined, using default value"
09:48:58 policy-opa-pdp | time="2025-06-19T09:44:08Z" level=warning msg="UseSASLForKAFKA not defined, using default value"
09:48:58 policy-opa-pdp | decodedConfig org.apache.kafka.common.security.scram.ScramLoginModule required username="policy-opa-pdp-ku" password=""
09:48:58 policy-opa-pdp | time="2025-06-19T09:44:08Z" level=debug msg="Username: "
09:48:58 policy-opa-pdp | time="2025-06-19T09:44:08Z" level=debug msg="Password: "
09:48:58 policy-opa-pdp | time="2025-06-19T09:44:08Z" level=warning msg="USE_KAFKA_FOR_PATCH not defined, using default value: false"
09:48:58 policy-opa-pdp | time="2025-06-19T09:44:08Z" level=debug msg="Configuration module: environment initialised"
09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:44:08.1472+00:00] logger initialised Filepath = /var/logs/logs.log, Logsize(MB) = 10, Backups = 3, Loglevel = debug
09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:44:08.1481+00:00] Name: opa-56bc6029-e683-4320-a6d7-f0316897aa5b
09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:44:08.1513+00:00] Starting OPA PDP Service
09:48:58 policy-opa-pdp | INFO[2025-06-19T09:44:13.1529+00:00] HTTP server started
09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:44:13.1541+00:00] Create an instance of OPA Object
09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:44:13.1542+00:00] Configure an instance of OPA Object
09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:44:13.1573+00:00] Topic start :::: policy-pdp-pap
09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:44:13.1574+00:00] Creating Kafka Consumer singleton instance
09:48:58 policy-opa-pdp | &map[auto.offset.reset:latest bootstrap.servers:kafka:9092 group.id:opa-pdp] DEBU[2025-06-19T09:44:13.1589+00:00] Topic Subscribed: policy-pdp-pap
09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:44:13.1589+00:00] Created Singleton consumer instance
09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:44:13.1634+00:00] Starting PDP Message Listener.....
09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:44:23.1647+00:00] New Ticker started with interval 60000
09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:44:33.1658+00:00] After registration successful delay
09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:45:23.1671+00:00] [OUT|KAFKA|policy-pdp-pap]
09:48:58 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Status Registration Message","response":null,"policies":[],"name":"opa-56bc6029-e683-4320-a6d7-f0316897aa5b","requestId":"5853735e-618a-4254-a15c-a21b2acbf8b4","pdpGroup":"opaGroup","pdpSubgroup":null,"timestampMs":"1750326323166","deploymentInstanceInfo":""}
09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:45:23.1672+00:00] Sending Heartbeat ...
09:48:58 policy-opa-pdp | 2025/06/19 09:45:23 KafkaProducer or producer produce message
09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:45:23.1938+00:00] [IN|KAFKA|policy-pdp-pap]
09:48:58 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Status Registration Message","response":null,"policies":[],"name":"opa-56bc6029-e683-4320-a6d7-f0316897aa5b","requestId":"5853735e-618a-4254-a15c-a21b2acbf8b4","pdpGroup":"opaGroup","pdpSubgroup":null,"timestampMs":"1750326323166","deploymentInstanceInfo":""}
09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:45:23.1939+00:00] messageType: PDP_STATUS
09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:45:23.1939+00:00] discarding event of type PDP_STATUS
09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:45:23.8166+00:00] [IN|KAFKA|policy-pdp-pap]
09:48:58 policy-opa-pdp | 
{"source":"pap-1593551e-cd74-40f6-b32a-093109ad43dc","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.slice.capacity.check":"ewogICAgInRocmVzaG9sZCI6IDcwCn0="},"policy":{"slice.capacity.check":"cGFja2FnZSBzbGljZS5jYXBhY2l0eS5jaGVjawoKIyBEZWZhdWx0IHJ1bGUgdG8gZGVueSBpZiBubyBwb2xpY3kgbWF0Y2hlcwpkZWZhdWx0IGRlY2lzaW9uIDo9IHsKCSJyZXN1bHQiOiAiUGVybWl0IiwKCSJyZWFzb24iOiAiTm8gbWF0Y2hpbmcgcnVsZXMgYXBwbGllZCIsCn0KCiMgRGVueSBydWxlIGZvciBgc3N0ID0gMWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAxCglpbnB1dC50b3RhbF9yZXNvdXJjZSA+IGRhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGQKfQoKIyBEZW55IHJ1bGUgZm9yIGBzc3QgPSAyOWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAyOQoJaW5wdXQudG90YWxfcmVzb3VyY2UgPiBkYXRhLm5vZGUuc2xpY2UuY2FwYWNpdHkuY2hlY2sudGhyZXNob2xkCn0K"}},"name":"slice.capacity.check","version":"1.0.0","metadata":{"policy-id":"slice.capacity.check","policy-version":"1.0.0"}}],"messageName":"PDP_UPDATE","requestId":"7dd8d058-c503-4f0c-a9fe-b4d4f19216f9","timestampMs":1750326323733,"name":"opa-56bc6029-e683-4320-a6d7-f0316897aa5b","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:45:23.8169+00:00] messageType: PDP_UPDATE 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:45:23.8174+00:00] PDP_UPDATE Message received: 
{"source":"pap-1593551e-cd74-40f6-b32a-093109ad43dc","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.slice.capacity.check":"ewogICAgInRocmVzaG9sZCI6IDcwCn0="},"policy":{"slice.capacity.check":"cGFja2FnZSBzbGljZS5jYXBhY2l0eS5jaGVjawoKIyBEZWZhdWx0IHJ1bGUgdG8gZGVueSBpZiBubyBwb2xpY3kgbWF0Y2hlcwpkZWZhdWx0IGRlY2lzaW9uIDo9IHsKCSJyZXN1bHQiOiAiUGVybWl0IiwKCSJyZWFzb24iOiAiTm8gbWF0Y2hpbmcgcnVsZXMgYXBwbGllZCIsCn0KCiMgRGVueSBydWxlIGZvciBgc3N0ID0gMWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAxCglpbnB1dC50b3RhbF9yZXNvdXJjZSA+IGRhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGQKfQoKIyBEZW55IHJ1bGUgZm9yIGBzc3QgPSAyOWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAyOQoJaW5wdXQudG90YWxfcmVzb3VyY2UgPiBkYXRhLm5vZGUuc2xpY2UuY2FwYWNpdHkuY2hlY2sudGhyZXNob2xkCn0K"}},"name":"slice.capacity.check","version":"1.0.0","metadata":{"policy-id":"slice.capacity.check","policy-version":"1.0.0"}}],"messageName":"PDP_UPDATE","requestId":"7dd8d058-c503-4f0c-a9fe-b4d4f19216f9","timestampMs":1750326323733,"name":"opa-56bc6029-e683-4320-a6d7-f0316897aa5b","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:45:23.8175+00:00] Policy Is Allowed: slice.capacity.check 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:45:23.8175+00:00] Validating properties data for policy: slice.capacity.check 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:45:23.8175+00:00] Validating properties policy for policy: slice.capacity.check 09:48:58 policy-opa-pdp | INFO[2025-06-19T09:45:23.8175+00:00] Validation successful for policy: slice.capacity.check 09:48:58 policy-opa-pdp | INFO[2025-06-19T09:45:23.8185+00:00] Directory created: /opt/policies/slice/capacity/check 09:48:58 policy-opa-pdp | INFO[2025-06-19T09:45:23.8186+00:00] Policy file saved: /opt/policies/slice/capacity/check/policy.rego 09:48:58 policy-opa-pdp | INFO[2025-06-19T09:45:23.8189+00:00] Directory created: /opt/data/node/slice/capacity/check 09:48:58 policy-opa-pdp | INFO[2025-06-19T09:45:23.8189+00:00] Data file saved: /opt/data/node/slice/capacity/check/data.json 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:45:23.8190+00:00] Before calling combinedoutput 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:45:23.8407+00:00] Bundle Built Sucessfully.... 
09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:45:23.8427+00:00] storage not found creating : /node 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:45:23.8427+00:00] storage not found creating : /node/slice 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:45:23.8427+00:00] storage not found creating : /node/slice/capacity 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:45:23.8427+00:00] storage not found creating : /node/slice/capacity/check 09:48:58 policy-opa-pdp | INFO[2025-06-19T09:45:23.8428+00:00] PoliciesDeployed Map: { 09:48:58 policy-opa-pdp | "deployed_policies_dict": [ 09:48:58 policy-opa-pdp | { 09:48:58 policy-opa-pdp | "data": [ 09:48:58 policy-opa-pdp | "node.slice.capacity.check" 09:48:58 policy-opa-pdp | ], 09:48:58 policy-opa-pdp | "policy": [ 09:48:58 policy-opa-pdp | "slice.capacity.check" 09:48:58 policy-opa-pdp | ], 09:48:58 policy-opa-pdp | "policy-id": "slice.capacity.check", 09:48:58 policy-opa-pdp | "policy-version": "1.0.0" 09:48:58 policy-opa-pdp | } 09:48:58 policy-opa-pdp | ] 09:48:58 policy-opa-pdp | } 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:45:23.8428+00:00] Loaded Policy: slice.capacity.check 09:48:58 policy-opa-pdp | INFO[2025-06-19T09:45:23.8428+00:00] Processed policies_to_be_deployed successfully 09:48:58 policy-opa-pdp | INFO[2025-06-19T09:45:23.8429+00:00] Sending PDP Status With Update Response 09:48:58 policy-opa-pdp | 2025/06/19 09:45:23 KafkaProducer or producer produce message 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:45:23.8429+00:00] [OUT|KAFKA|policy-pdp-pap] 09:48:58 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"7dd8d058-c503-4f0c-a9fe-b4d4f19216f9","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"slice.capacity.check\": \"1.0.0\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-56bc6029-e683-4320-a6d7-f0316897aa5b","requestId":"93be3da5-9e97-4334-a765-b0ca6ed3a779","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750326323842","deploymentInstanceInfo":""} 09:48:58 policy-opa-pdp | INFO[2025-06-19T09:45:23.8429+00:00] PDP_STATUS Message Sent Successfully 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:45:23.8430+00:00] 120000 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:45:23.8431+00:00] New Ticker started with interval 120000 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:45:23.8517+00:00] [IN|KAFKA|policy-pdp-pap] 09:48:58 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"7dd8d058-c503-4f0c-a9fe-b4d4f19216f9","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"slice.capacity.check\": \"1.0.0\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-56bc6029-e683-4320-a6d7-f0316897aa5b","requestId":"93be3da5-9e97-4334-a765-b0ca6ed3a779","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750326323842","deploymentInstanceInfo":""} 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:45:23.8518+00:00] messageType: PDP_STATUS 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:45:23.8518+00:00] discarding event of type PDP_STATUS 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:45:23.8808+00:00] [IN|KAFKA|policy-pdp-pap] 09:48:58 policy-opa-pdp | 
{"source":"pap-1593551e-cd74-40f6-b32a-093109ad43dc","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"76aa2592-933e-4236-ab45-ef442a6da711","timestampMs":1750326323734,"name":"opa-56bc6029-e683-4320-a6d7-f0316897aa5b","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:45:23.8809+00:00] messageType: PDP_STATE_CHANGE 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:45:23.8814+00:00] PDP STATE CHANGE message received: {"source":"pap-1593551e-cd74-40f6-b32a-093109ad43dc","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"76aa2592-933e-4236-ab45-ef442a6da711","timestampMs":1750326323734,"name":"opa-56bc6029-e683-4320-a6d7-f0316897aa5b","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:45:23.8815+00:00] State change from PASSIVE To : ACTIVE 09:48:58 policy-opa-pdp | INFO[2025-06-19T09:45:23.8816+00:00] Sending PDP Status With State Change response 09:48:58 policy-opa-pdp | 2025/06/19 09:45:23 KafkaProducer or producer produce message 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:45:23.8819+00:00] [OUT|KAFKA|policy-pdp-pap] 09:48:58 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message to Pdp State Change","response":{"responseTo":"76aa2592-933e-4236-ab45-ef442a6da711","responseStatus":"SUCCESS","responseMessage":"PDP State Changed From PASSIVE TO Active"},"policies":[],"name":"opa-56bc6029-e683-4320-a6d7-f0316897aa5b","requestId":"26795b56-69ea-4882-b31e-97b1af122c5e","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750326323881","deploymentInstanceInfo":""} 09:48:58 policy-opa-pdp | INFO[2025-06-19T09:45:23.8819+00:00] PDP_STATUS With State Change Message Sent Successfully 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:45:23.8915+00:00] [IN|KAFKA|policy-pdp-pap] 09:48:58 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message to Pdp State Change","response":{"responseTo":"76aa2592-933e-4236-ab45-ef442a6da711","responseStatus":"SUCCESS","responseMessage":"PDP State Changed From PASSIVE TO Active"},"policies":[],"name":"opa-56bc6029-e683-4320-a6d7-f0316897aa5b","requestId":"26795b56-69ea-4882-b31e-97b1af122c5e","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750326323881","deploymentInstanceInfo":""} 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:45:23.8916+00:00] messageType: PDP_STATUS 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:45:23.8916+00:00] discarding event of type PDP_STATUS 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:45:24.2561+00:00] [IN|KAFKA|policy-pdp-pap] 09:48:58 policy-opa-pdp | {"source":"pap-1593551e-cd74-40f6-b32a-093109ad43dc","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"1b539a1c-1904-44ff-83e7-d5a08ccbce45","timestampMs":1750326324237,"name":"opa-56bc6029-e683-4320-a6d7-f0316897aa5b","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:45:24.2562+00:00] messageType: PDP_UPDATE 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:45:24.2564+00:00] PDP_UPDATE Message received: 
{"source":"pap-1593551e-cd74-40f6-b32a-093109ad43dc","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"1b539a1c-1904-44ff-83e7-d5a08ccbce45","timestampMs":1750326324237,"name":"opa-56bc6029-e683-4320-a6d7-f0316897aa5b","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 09:48:58 policy-opa-pdp | INFO[2025-06-19T09:45:24.2564+00:00] Sending PDP Status With Update Response 09:48:58 policy-opa-pdp | 2025/06/19 09:45:24 KafkaProducer or producer produce message 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:45:24.2565+00:00] [OUT|KAFKA|policy-pdp-pap] 09:48:58 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"1b539a1c-1904-44ff-83e7-d5a08ccbce45","responseStatus":"SUCCESS","responseMessage":"PDP UPDATE is successfull"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-56bc6029-e683-4320-a6d7-f0316897aa5b","requestId":"df2d7b1f-2f76-4bf5-a4e0-54cd5aa15cb1","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750326324256","deploymentInstanceInfo":""} 09:48:58 policy-opa-pdp | INFO[2025-06-19T09:45:24.2565+00:00] PDP_STATUS Message Sent Successfully 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:45:24.2565+00:00] 120000 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:45:24.2648+00:00] [IN|KAFKA|policy-pdp-pap] 09:48:58 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"1b539a1c-1904-44ff-83e7-d5a08ccbce45","responseStatus":"SUCCESS","responseMessage":"PDP UPDATE is successfull"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-56bc6029-e683-4320-a6d7-f0316897aa5b","requestId":"df2d7b1f-2f76-4bf5-a4e0-54cd5aa15cb1","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750326324256","deploymentInstanceInfo":""} 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:45:24.2648+00:00] messageType: PDP_STATUS 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:45:24.2649+00:00] discarding event of type PDP_STATUS 09:48:58 policy-opa-pdp | 2025/06/19 09:46:23 KafkaProducer or producer produce message 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:46:23.1662+00:00] [OUT|KAFKA|policy-pdp-pap] 09:48:58 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp heartbeat","response":null,"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-56bc6029-e683-4320-a6d7-f0316897aa5b","requestId":"53ce1623-3a2d-4f45-b150-4b3dfd3370d6","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750326383165","deploymentInstanceInfo":""} 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:46:23.1667+00:00] Sending Heartbeat ... 
09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:46:23.1755+00:00] [IN|KAFKA|policy-pdp-pap] 09:48:58 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp heartbeat","response":null,"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-56bc6029-e683-4320-a6d7-f0316897aa5b","requestId":"53ce1623-3a2d-4f45-b150-4b3dfd3370d6","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750326383165","deploymentInstanceInfo":""} 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:46:23.1756+00:00] messageType: PDP_STATUS 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:46:23.1756+00:00] discarding event of type PDP_STATUS 09:48:58 policy-opa-pdp | WARN[2025-06-19T09:46:36.4616+00:00] Invalid or Missing Request ID 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:46:36.4641+00:00] Received Health Check message 09:48:58 policy-opa-pdp | INFO[2025-06-19T09:46:36.4732+00:00] PDP received a request to get data through API 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:46:36.4733+00:00] datapath to get Data : / 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:46:36.4735+00:00] Json Data at /: {"node":{"slice":{"capacity":{"check":{"threshold":70}}}},"system":{"version":{"build_commit":"","build_hostname":"","build_timestamp":"","version":"1.1.0"}}} 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:46:37.9515+00:00] [IN|KAFKA|policy-pdp-pap] 09:48:58 policy-opa-pdp | {"source":"pap-1593551e-cd74-40f6-b32a-093109ad43dc","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.zoneB":"ewogICJ6b25lIjogewogICAgInpvbmVfYWNjZXNzX2xvZ3MiOiBbCiAgICAgIHsgImxvZ19pZCI6ICJsb2cxIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDA5OjAwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJncmFudGVkIiwgInVzZXIiOiAidXNlcjEiIH0sCiAgICAgIHsgImxvZ19pZCI6ICJsb2cyIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDEwOjMwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJkZW5pZWQiLCAidXNlciI6ICJ1c2VyMiIgfSwKICAgICAgeyAibG9nX2lkIjogImxvZzMiLCAidGltZXN0YW1wIjogIjIwMjQtMTEtMDFUMTE6MDA6MDBaIiwgInpvbmVfaWQiOiAiem9uZUIiLCAiYWNjZXNzIjogImdyYW50ZWQiLCAidXNlciI6ICJ1c2VyMyIgfQogICAgXQogIH0KfQ=="},"policy":{"zoneB":"cGFja2FnZSB6b25lQgogCmltcG9ydCByZWdvLnYxCiAKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQogCmFsbG93IGlmIHsKICAgIGhhc196b25lX2FjY2VzcwogICAgYWN0aW9uX2lzX2xvZ192aWV3Cn0KIAphY3Rpb25faXNfbG9nX3ZpZXcgaWYgewogICAgInZpZXciIGluIGlucHV0LmFjdGlvbnMKfQogCmhhc196b25lX2FjY2VzcyBjb250YWlucyBhY2Nlc3NfZGF0YSBpZiB7CiAgICBzb21lIHpvbmVfZGF0YSBpbiBkYXRhLm5vZGUuem9uZUIuem9uZS56b25lX2FjY2Vzc19sb2dzCiAgICB6b25lX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KICAgIHpvbmVfZGF0YS50aW1lc3RhbXAgPCBpbnB1dC50aW1lX3BlcmlvZC50bwogICAgem9uZV9kYXRhLnpvbmVfaWQgPT0gaW5wdXQuem9uZV9pZAogICAgYWNjZXNzX2RhdGEgOj0ge2RhdGF0eXBlOiB6b25lX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"zoneB","version":"1.0.6","metadata":{"policy-id":"zoneB","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"33cf87c0-6f01-4d54-8ddf-4af07f9a8cdd","timestampMs":1750326397902,"name":"opa-56bc6029-e683-4320-a6d7-f0316897aa5b","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:46:37.9516+00:00] messageType: PDP_UPDATE 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:46:37.9517+00:00] PDP_UPDATE Message received: 
{"source":"pap-1593551e-cd74-40f6-b32a-093109ad43dc","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.zoneB":"ewogICJ6b25lIjogewogICAgInpvbmVfYWNjZXNzX2xvZ3MiOiBbCiAgICAgIHsgImxvZ19pZCI6ICJsb2cxIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDA5OjAwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJncmFudGVkIiwgInVzZXIiOiAidXNlcjEiIH0sCiAgICAgIHsgImxvZ19pZCI6ICJsb2cyIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDEwOjMwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJkZW5pZWQiLCAidXNlciI6ICJ1c2VyMiIgfSwKICAgICAgeyAibG9nX2lkIjogImxvZzMiLCAidGltZXN0YW1wIjogIjIwMjQtMTEtMDFUMTE6MDA6MDBaIiwgInpvbmVfaWQiOiAiem9uZUIiLCAiYWNjZXNzIjogImdyYW50ZWQiLCAidXNlciI6ICJ1c2VyMyIgfQogICAgXQogIH0KfQ=="},"policy":{"zoneB":"cGFja2FnZSB6b25lQgogCmltcG9ydCByZWdvLnYxCiAKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQogCmFsbG93IGlmIHsKICAgIGhhc196b25lX2FjY2VzcwogICAgYWN0aW9uX2lzX2xvZ192aWV3Cn0KIAphY3Rpb25faXNfbG9nX3ZpZXcgaWYgewogICAgInZpZXciIGluIGlucHV0LmFjdGlvbnMKfQogCmhhc196b25lX2FjY2VzcyBjb250YWlucyBhY2Nlc3NfZGF0YSBpZiB7CiAgICBzb21lIHpvbmVfZGF0YSBpbiBkYXRhLm5vZGUuem9uZUIuem9uZS56b25lX2FjY2Vzc19sb2dzCiAgICB6b25lX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KICAgIHpvbmVfZGF0YS50aW1lc3RhbXAgPCBpbnB1dC50aW1lX3BlcmlvZC50bwogICAgem9uZV9kYXRhLnpvbmVfaWQgPT0gaW5wdXQuem9uZV9pZAogICAgYWNjZXNzX2RhdGEgOj0ge2RhdGF0eXBlOiB6b25lX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"zoneB","version":"1.0.6","metadata":{"policy-id":"zoneB","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"33cf87c0-6f01-4d54-8ddf-4af07f9a8cdd","timestampMs":1750326397902,"name":"opa-56bc6029-e683-4320-a6d7-f0316897aa5b","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:46:37.9518+00:00] Check if Policy is Already Deployed: { 09:48:58 policy-opa-pdp | "deployed_policies_dict": [ 09:48:58 policy-opa-pdp | { 09:48:58 policy-opa-pdp | "data": [ 09:48:58 policy-opa-pdp | "node.slice.capacity.check" 09:48:58 policy-opa-pdp | ], 09:48:58 policy-opa-pdp | "policy": [ 09:48:58 policy-opa-pdp | "slice.capacity.check" 09:48:58 policy-opa-pdp | ], 09:48:58 policy-opa-pdp | "policy-id": "slice.capacity.check", 09:48:58 policy-opa-pdp | "policy-version": "1.0.0" 09:48:58 policy-opa-pdp | } 09:48:58 policy-opa-pdp | ] 09:48:58 policy-opa-pdp | } 09:48:58 policy-opa-pdp | INFO[2025-06-19T09:46:37.9518+00:00] Policy is new and should be deployed: zoneB 1.0.6 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:46:37.9518+00:00] Policy Is Allowed: zoneB 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:46:37.9518+00:00] Validating properties data for policy: zoneB 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:46:37.9518+00:00] Validating properties policy for policy: zoneB 09:48:58 policy-opa-pdp | INFO[2025-06-19T09:46:37.9518+00:00] Validation successful for policy: zoneB 09:48:58 policy-opa-pdp | INFO[2025-06-19T09:46:37.9519+00:00] Directory created: /opt/policies/zoneB 09:48:58 policy-opa-pdp | INFO[2025-06-19T09:46:37.9520+00:00] Policy file saved: /opt/policies/zoneB/policy.rego 09:48:58 policy-opa-pdp | INFO[2025-06-19T09:46:37.9520+00:00] Directory created: /opt/data/node/zoneB 09:48:58 policy-opa-pdp | INFO[2025-06-19T09:46:37.9521+00:00] Data file saved: /opt/data/node/zoneB/data.json 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:46:37.9521+00:00] Before calling combinedoutput 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:46:37.9870+00:00] Bundle Built Sucessfully.... 
09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:46:37.9924+00:00] storage not found creating : /node/zoneB 09:48:58 policy-opa-pdp | INFO[2025-06-19T09:46:37.9926+00:00] PoliciesDeployed Map: { 09:48:58 policy-opa-pdp | "deployed_policies_dict": [ 09:48:58 policy-opa-pdp | { 09:48:58 policy-opa-pdp | "data": [ 09:48:58 policy-opa-pdp | "node.slice.capacity.check" 09:48:58 policy-opa-pdp | ], 09:48:58 policy-opa-pdp | "policy": [ 09:48:58 policy-opa-pdp | "slice.capacity.check" 09:48:58 policy-opa-pdp | ], 09:48:58 policy-opa-pdp | "policy-id": "slice.capacity.check", 09:48:58 policy-opa-pdp | "policy-version": "1.0.0" 09:48:58 policy-opa-pdp | }, 09:48:58 policy-opa-pdp | { 09:48:58 policy-opa-pdp | "data": [ 09:48:58 policy-opa-pdp | "node.zoneB" 09:48:58 policy-opa-pdp | ], 09:48:58 policy-opa-pdp | "policy": [ 09:48:58 policy-opa-pdp | "zoneB" 09:48:58 policy-opa-pdp | ], 09:48:58 policy-opa-pdp | "policy-id": "zoneB", 09:48:58 policy-opa-pdp | "policy-version": "1.0.6" 09:48:58 policy-opa-pdp | } 09:48:58 policy-opa-pdp | ] 09:48:58 policy-opa-pdp | } 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:46:37.9926+00:00] Loaded Policy: zoneB 09:48:58 policy-opa-pdp | INFO[2025-06-19T09:46:37.9926+00:00] Processed policies_to_be_deployed successfully 09:48:58 policy-opa-pdp | INFO[2025-06-19T09:46:37.9927+00:00] Sending PDP Status With Update Response 09:48:58 policy-opa-pdp | 2025/06/19 09:46:37 KafkaProducer or producer produce message 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:46:37.9928+00:00] [OUT|KAFKA|policy-pdp-pap] 09:48:58 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"33cf87c0-6f01-4d54-8ddf-4af07f9a8cdd","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"zoneB\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"zoneB","version":"1.0.6"}],"name":"opa-56bc6029-e683-4320-a6d7-f0316897aa5b","requestId":"3a4e2f6a-7154-441c-b821-2775e96189c7","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750326397992","deploymentInstanceInfo":""} 09:48:58 policy-opa-pdp | INFO[2025-06-19T09:46:37.9928+00:00] PDP_STATUS Message Sent Successfully 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:46:37.9928+00:00] 0 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:46:38.0016+00:00] [IN|KAFKA|policy-pdp-pap] 09:48:58 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"33cf87c0-6f01-4d54-8ddf-4af07f9a8cdd","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"zoneB\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"zoneB","version":"1.0.6"}],"name":"opa-56bc6029-e683-4320-a6d7-f0316897aa5b","requestId":"3a4e2f6a-7154-441c-b821-2775e96189c7","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750326397992","deploymentInstanceInfo":""} 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:46:38.0017+00:00] messageType: PDP_STATUS 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:46:38.0017+00:00] discarding event of type PDP_STATUS 09:48:58 policy-opa-pdp | INFO[2025-06-19T09:47:02.1508+00:00] PDP received a request to get data through API 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:02.1509+00:00] datapath to get Data : /node/zoneB/zone 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:02.1510+00:00] 
Json Data at /node/zoneB/zone: {"zone_access_logs":[{"access":"granted","log_id":"log1","timestamp":"2024-11-01T09:00:00Z","user":"user1","zone_id":"zoneA"},{"access":"denied","log_id":"log2","timestamp":"2024-11-01T10:30:00Z","user":"user2","zone_id":"zoneA"},{"access":"granted","log_id":"log3","timestamp":"2024-11-01T11:00:00Z","user":"user3","zone_id":"zoneB"}]} 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:02.1625+00:00] PDP received a decision request. 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:02.1626+00:00] Headers processed for requestId: Unknown 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:02.1631+00:00] Validation successful for request fields 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:02.1632+00:00] SDK making a decision 09:48:58 policy-opa-pdp | {"decision_id":"5d252096-fb4d-449f-87c7-a2ce47eaa715","input":{"actions":["view"],"datatypes":["access","user"],"log_id":"log1","time_period":{"from":"2024-11-01T09:00:00Z","to":"2024-11-01T10:00:00Z"},"zone_id":"zoneA"},"labels":{"id":"ef72ff35-4194-46e7-b48c-ea86b3d870a5","version":"1.1.0"},"level":"info","metrics":{"timer_rego_external_resolve_ns":1010,"timer_rego_query_compile_ns":216525,"timer_rego_query_eval_ns":1488923,"timer_rego_query_parse_ns":161993,"timer_sdk_decision_eval_ns":2123337},"msg":"Decision Log","nd_builtin_cache":null,"path":"zoneB","result":{"action_is_log_view":true,"allow":true,"has_zone_access":[{"access":"granted","user":"user1"}]},"time":"2025-06-19T09:47:02Z","timestamp":"2025-06-19T09:47:02.163342959Z","type":"openpolicyagent.org/decision_logs"} 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:02.1663+00:00] RAW opa Decision output: 09:48:58 policy-opa-pdp | { 09:48:58 policy-opa-pdp | "ID": "5d252096-fb4d-449f-87c7-a2ce47eaa715", 09:48:58 policy-opa-pdp | "Result": { 09:48:58 policy-opa-pdp | "action_is_log_view": true, 09:48:58 policy-opa-pdp | "allow": true, 09:48:58 policy-opa-pdp | "has_zone_access": [ 09:48:58 policy-opa-pdp | { 09:48:58 policy-opa-pdp | "access": "granted", 09:48:58 policy-opa-pdp | "user": "user1" 09:48:58 policy-opa-pdp | } 09:48:58 policy-opa-pdp | ] 09:48:58 policy-opa-pdp | }, 09:48:58 policy-opa-pdp | "Provenance": { 09:48:58 policy-opa-pdp | "version": "1.1.0", 09:48:58 policy-opa-pdp | "build_commit": "", 09:48:58 policy-opa-pdp | "build_timestamp": "", 09:48:58 policy-opa-pdp | "build_hostname": "" 09:48:58 policy-opa-pdp | } 09:48:58 policy-opa-pdp | } 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:02.1749+00:00] PDP received a decision request. 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:02.1750+00:00] Headers processed for requestId: Unknown 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:02.1756+00:00] Validation successful for request fields 09:48:58 policy-opa-pdp | WARN[2025-06-19T09:47:02.1759+00:00] Policy Name zoeB does not exist 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:02.1828+00:00] PDP received a decision request. 
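The decision requests above arrive over the PDP's REST API. Below is a minimal sketch of the first one, reconstructed from the logged decision input; the endpoint path, port, and the policyName wrapper field are assumptions based on this CSIT setup and are not shown in the log itself:

curl -sk -u "$API_USER:$API_PASSWORD" \
  -H 'Content-Type: application/json' \
  -X POST 'https://localhost:8282/policy/pdpo/v1/decision' \
  -d '{
    "policyName": "zoneB",
    "input": {
      "actions": ["view"],
      "datatypes": ["access", "user"],
      "log_id": "log1",
      "time_period": {"from": "2024-11-01T09:00:00Z", "to": "2024-11-01T10:00:00Z"},
      "zone_id": "zoneA"
    }
  }'

The second request, sent with the misspelled policy name zoeB, is the suite's negative case and is rejected above with "Policy Name zoeB does not exist"; the third repeats the valid request and succeeds below.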
09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:02.1828+00:00] Headers processed for requestId: Unknown
09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:02.1833+00:00] Validation successful for request fields
09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:02.1836+00:00] SDK making a decision
09:48:58 policy-opa-pdp | {"decision_id":"2f211cfe-bfd0-4091-a8c3-30a3047210cd","input":{"actions":["view"],"datatypes":["access","user"],"log_id":"log1","time_period":{"from":"2024-11-01T09:00:00Z","to":"2024-11-01T10:00:00Z"},"zone_id":"zoneA"},"labels":{"id":"ef72ff35-4194-46e7-b48c-ea86b3d870a5","version":"1.1.0"},"level":"info","metrics":{"timer_rego_external_resolve_ns":1310,"timer_rego_query_eval_ns":734295,"timer_sdk_decision_eval_ns":1051744},"msg":"Decision Log","nd_builtin_cache":null,"path":"zoneB","result":{"action_is_log_view":true,"allow":true,"has_zone_access":[{"access":"granted","user":"user1"}]},"time":"2025-06-19T09:47:02Z","timestamp":"2025-06-19T09:47:02.183776107Z","type":"openpolicyagent.org/decision_logs"}
09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:02.1850+00:00] RAW opa Decision output:
09:48:58 policy-opa-pdp | {
09:48:58 policy-opa-pdp | "ID": "2f211cfe-bfd0-4091-a8c3-30a3047210cd",
09:48:58 policy-opa-pdp | "Result": {
09:48:58 policy-opa-pdp | "action_is_log_view": true,
09:48:58 policy-opa-pdp | "allow": true,
09:48:58 policy-opa-pdp | "has_zone_access": [
09:48:58 policy-opa-pdp | {
09:48:58 policy-opa-pdp | "access": "granted",
09:48:58 policy-opa-pdp | "user": "user1"
09:48:58 policy-opa-pdp | }
09:48:58 policy-opa-pdp | ]
09:48:58 policy-opa-pdp | },
09:48:58 policy-opa-pdp | "Provenance": {
09:48:58 policy-opa-pdp | "version": "1.1.0",
09:48:58 policy-opa-pdp | "build_commit": "",
09:48:58 policy-opa-pdp | "build_timestamp": "",
09:48:58 policy-opa-pdp | "build_hostname": ""
09:48:58 policy-opa-pdp | }
09:48:58 policy-opa-pdp | }
09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:02.5723+00:00] [IN|KAFKA|policy-pdp-pap]
09:48:58 policy-opa-pdp | {"source":"pap-1593551e-cd74-40f6-b32a-093109ad43dc","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"zoneB","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"763b7dfc-1e2e-45d1-a073-925b1118661f","timestampMs":1750326422531,"name":"opa-56bc6029-e683-4320-a6d7-f0316897aa5b","pdpGroup":"opaGroup","pdpSubgroup":"opa"}
09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:02.5724+00:00] messageType: PDP_UPDATE
09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:02.5726+00:00] PDP_UPDATE Message received: (payload identical to the PDP_UPDATE message logged above)
09:48:58 policy-opa-pdp | INFO[2025-06-19T09:47:02.5726+00:00] Found Policies to be undeployed
09:48:58 policy-opa-pdp | INFO[2025-06-19T09:47:02.5726+00:00] Extracted Policy Name: zoneB, Version: 1.0.6 for undeployment
09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:02.5726+00:00] Deleting Policy from OPA : /zoneB
09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:02.5757+00:00] Removing policy directory: /opt/policies/zoneB
09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:02.5760+00:00] Deleting data from OPA : /node/zoneB
09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:02.5761+00:00] Analyzing dataPath: /node/zoneB
09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:02.5761+00:00] Path segments: [ node zoneB]
09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:02.5761+00:00] Path doesn't have any parent-child hierarchy; so returning the original path: /node/zoneB
09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:02.5761+00:00] Removing data directory: /opt/data/node/zoneB
09:48:58 policy-opa-pdp | INFO[2025-06-19T09:47:02.5764+00:00] PoliciesDeployed Map: {
09:48:58 policy-opa-pdp | "deployed_policies_dict": [
09:48:58 policy-opa-pdp | {
09:48:58 policy-opa-pdp | "data": [
09:48:58 policy-opa-pdp | "node.slice.capacity.check"
09:48:58 policy-opa-pdp | ],
09:48:58 policy-opa-pdp | "policy": [
09:48:58 policy-opa-pdp | "slice.capacity.check"
09:48:58 policy-opa-pdp | ],
09:48:58 policy-opa-pdp | "policy-id": "slice.capacity.check",
09:48:58 policy-opa-pdp | "policy-version": "1.0.0"
09:48:58 policy-opa-pdp | }
09:48:58 policy-opa-pdp | ]
09:48:58 policy-opa-pdp | }
09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:02.5764+00:00] Policies Map After Undeployment : {
09:48:58 policy-opa-pdp | "deployed_policies_dict": [
09:48:58 policy-opa-pdp | {
09:48:58 policy-opa-pdp | "data": [
09:48:58 policy-opa-pdp | "node.slice.capacity.check"
09:48:58 policy-opa-pdp | ],
09:48:58 policy-opa-pdp | "policy": [
09:48:58 policy-opa-pdp | "slice.capacity.check"
09:48:58 policy-opa-pdp | ],
09:48:58 policy-opa-pdp | "policy-id": "slice.capacity.check",
09:48:58 policy-opa-pdp | "policy-version": "1.0.0"
09:48:58 policy-opa-pdp | }
09:48:58 policy-opa-pdp | ]
09:48:58 policy-opa-pdp | }
09:48:58 policy-opa-pdp | INFO[2025-06-19T09:47:02.5764+00:00] Processed policies_to_be_undeployed successfully
09:48:58 policy-opa-pdp | INFO[2025-06-19T09:47:02.5766+00:00] Sending PDP Status With Update Response
09:48:58 policy-opa-pdp | 2025/06/19 09:47:02 KafkaProducer or producer produce message
09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:02.5768+00:00] [OUT|KAFKA|policy-pdp-pap]
09:48:58 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"763b7dfc-1e2e-45d1-a073-925b1118661f","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"zoneB\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-56bc6029-e683-4320-a6d7-f0316897aa5b","requestId":"418427c8-d212-4917-b5d5-b7a57fe75342","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750326422576","deploymentInstanceInfo":""}
09:48:58 policy-opa-pdp | INFO[2025-06-19T09:47:02.5768+00:00] PDP_STATUS Message Sent Successfully
09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:02.5769+00:00] 0
09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:02.5845+00:00] [IN|KAFKA|policy-pdp-pap]
09:48:58 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"763b7dfc-1e2e-45d1-a073-925b1118661f","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"zoneB\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-56bc6029-e683-4320-a6d7-f0316897aa5b","requestId":"418427c8-d212-4917-b5d5-b7a57fe75342","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750326422576","deploymentInstanceInfo":""}
09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:02.5845+00:00] messageType: PDP_STATUS
09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:02.5845+00:00] discarding event of type PDP_STATUS
09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:03.8761+00:00] [IN|KAFKA|policy-pdp-pap]
09:48:58 policy-opa-pdp | {"source":"pap-1593551e-cd74-40f6-b32a-093109ad43dc","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.vehicle":"ewogICJ2ZWhpY2xlcyI6IFsKICAgIHsgInZlaGljbGVfaWQiOiAidjEiLCAib3duZXIiOiAidXNlcjEiLCAidHlwZSI6ICJjYXIiLCAic3RhdHVzIjogImF2YWlsYWJsZSIgfSwKICAgIHsgInZlaGljbGVfaWQiOiAidjIiLCAib3duZXIiOiAidXNlcjIiLCAidHlwZSI6ICJiaWtlIiwgInN0YXR1cyI6ICJpbiB1c2UiIH0KICBdCn0K"},"policy":{"vehicle":"cGFja2FnZSB2ZWhpY2xlCgppbXBvcnQgIHJlZ28udjEKCmRlZmF1bHQgYWxsb3cgOj0gZmFsc2UKCmFsbG93IGlmIHsKICAgIHVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzCiAgICBhY3Rpb25faXNfZ3JhbnRlZAp9CgphY3Rpb25faXNfZ3JhbnRlZCBpZiB7CiAgICAidXNlIiBpbiBpbnB1dC5hY3Rpb25zCn0KCnVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzIGNvbnRhaW5zIHZlaGljbGVfZGF0YSBpZiB7CiAgICBzb21lIHZlaGljbGUgaW4gZGF0YS5ub2RlLnZlaGljbGUudmVoaWNsZXMKICAgIHZlaGljbGUudmVoaWNsZV9pZCA9PSBpbnB1dC52ZWhpY2xlX2lkCiAgICB2ZWhpY2xlLm93bmVyID09IGlucHV0LnVzZXIKICAgIHZlaGljbGVfZGF0YSA6PSB7aW5mbzogdmVoaWNsZVtpbmZvXSB8IGluZm8gaW4gaW5wdXQuYXR0cmlidXRlc30KfQo="}},"name":"vehicle","version":"1.0.6","metadata":{"policy-id":"vehicle","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"d80aabc3-a6fb-46f5-82f3-7a82d47b9a14","timestampMs":1750326423851,"name":"opa-56bc6029-e683-4320-a6d7-f0316897aa5b","pdpGroup":"opaGroup","pdpSubgroup":"opa"}
09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:03.8763+00:00] messageType: PDP_UPDATE
09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:03.8764+00:00] PDP_UPDATE Message received: (payload identical to the PDP_UPDATE message logged above)
09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:03.8765+00:00] Check if Policy is Already Deployed: {
09:48:58 policy-opa-pdp | "deployed_policies_dict": [
09:48:58 policy-opa-pdp | {
09:48:58 policy-opa-pdp | "data": [
09:48:58 policy-opa-pdp | "node.slice.capacity.check"
09:48:58 policy-opa-pdp | ],
09:48:58 policy-opa-pdp | "policy": [
09:48:58 policy-opa-pdp | "slice.capacity.check"
09:48:58 policy-opa-pdp | ],
09:48:58 policy-opa-pdp | "policy-id": "slice.capacity.check",
09:48:58 policy-opa-pdp | "policy-version": "1.0.0"
09:48:58 policy-opa-pdp | }
09:48:58 policy-opa-pdp | ]
09:48:58 policy-opa-pdp | }
09:48:58 policy-opa-pdp | INFO[2025-06-19T09:47:03.8765+00:00] Policy is new and should be deployed: vehicle 1.0.6
09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:03.8765+00:00] Policy Is Allowed: vehicle
09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:03.8765+00:00] Validating properties data for policy: vehicle
09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:03.8765+00:00] Validating properties policy for policy: vehicle
09:48:58 policy-opa-pdp | INFO[2025-06-19T09:47:03.8766+00:00] Validation successful for policy: vehicle
09:48:58 policy-opa-pdp | INFO[2025-06-19T09:47:03.8768+00:00] Directory created: /opt/policies/vehicle
09:48:58 policy-opa-pdp | INFO[2025-06-19T09:47:03.8769+00:00] Policy file saved: /opt/policies/vehicle/policy.rego
09:48:58 policy-opa-pdp | INFO[2025-06-19T09:47:03.8770+00:00] Directory created: /opt/data/node/vehicle
09:48:58 policy-opa-pdp | INFO[2025-06-19T09:47:03.8770+00:00] Data file saved: /opt/data/node/vehicle/data.json
09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:03.8770+00:00] Before calling combinedoutput
09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:03.9043+00:00] Bundle Built Successfully....
09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:03.9077+00:00] storage not found creating : /node/vehicle
09:48:58 policy-opa-pdp | INFO[2025-06-19T09:47:03.9078+00:00] PoliciesDeployed Map: {
09:48:58 policy-opa-pdp | "deployed_policies_dict": [
09:48:58 policy-opa-pdp | {
09:48:58 policy-opa-pdp | "data": [
09:48:58 policy-opa-pdp | "node.slice.capacity.check"
09:48:58 policy-opa-pdp | ],
09:48:58 policy-opa-pdp | "policy": [
09:48:58 policy-opa-pdp | "slice.capacity.check"
09:48:58 policy-opa-pdp | ],
09:48:58 policy-opa-pdp | "policy-id": "slice.capacity.check",
09:48:58 policy-opa-pdp | "policy-version": "1.0.0"
09:48:58 policy-opa-pdp | },
09:48:58 policy-opa-pdp | {
09:48:58 policy-opa-pdp | "data": [
09:48:58 policy-opa-pdp | "node.vehicle"
09:48:58 policy-opa-pdp | ],
09:48:58 policy-opa-pdp | "policy": [
09:48:58 policy-opa-pdp | "vehicle"
09:48:58 policy-opa-pdp | ],
09:48:58 policy-opa-pdp | "policy-id": "vehicle",
09:48:58 policy-opa-pdp | "policy-version": "1.0.6"
09:48:58 policy-opa-pdp | }
09:48:58 policy-opa-pdp | ]
09:48:58 policy-opa-pdp | }
09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:03.9078+00:00] Loaded Policy: vehicle
09:48:58 policy-opa-pdp | INFO[2025-06-19T09:47:03.9078+00:00] Processed policies_to_be_deployed successfully
09:48:58 policy-opa-pdp | INFO[2025-06-19T09:47:03.9079+00:00] Sending PDP Status With Update Response
09:48:58 policy-opa-pdp | 2025/06/19 09:47:03 KafkaProducer or producer produce message
09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:03.9080+00:00] [OUT|KAFKA|policy-pdp-pap]
09:48:58 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"d80aabc3-a6fb-46f5-82f3-7a82d47b9a14","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"vehicle\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"vehicle","version":"1.0.6"}],"name":"opa-56bc6029-e683-4320-a6d7-f0316897aa5b","requestId":"0fc73e43-24dc-4686-8cd5-4f45f3919885","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750326423907","deploymentInstanceInfo":""}
09:48:58 policy-opa-pdp | INFO[2025-06-19T09:47:03.9080+00:00] PDP_STATUS Message Sent Successfully
09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:03.9080+00:00] 0
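As with the earlier policies, the base64 policy payload in the vehicle PDP_UPDATE decodes to the Rego module below, and the node.vehicle data payload decodes to the vehicles document echoed later at /node/vehicle (straight decodes of the message contents):

package vehicle

import rego.v1

default allow := false

allow if {
    user_has_vehicle_access
    action_is_granted
}

action_is_granted if {
    "use" in input.actions
}

user_has_vehicle_access contains vehicle_data if {
    some vehicle in data.node.vehicle.vehicles
    vehicle.vehicle_id == input.vehicle_id
    vehicle.owner == input.user
    vehicle_data := {info: vehicle[info] | info in input.attributes}
}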
09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:03.9192+00:00] [IN|KAFKA|policy-pdp-pap] 09:48:58 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"d80aabc3-a6fb-46f5-82f3-7a82d47b9a14","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"vehicle\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"vehicle","version":"1.0.6"}],"name":"opa-56bc6029-e683-4320-a6d7-f0316897aa5b","requestId":"0fc73e43-24dc-4686-8cd5-4f45f3919885","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750326423907","deploymentInstanceInfo":""} 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:03.9194+00:00] messageType: PDP_STATUS 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:03.9195+00:00] discarding event of type PDP_STATUS 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:23.8461+00:00] [OUT|KAFKA|policy-pdp-pap] 09:48:58 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp heartbeat","response":null,"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"vehicle","version":"1.0.6"}],"name":"opa-56bc6029-e683-4320-a6d7-f0316897aa5b","requestId":"8a83170a-7682-4fb2-8f36-47c4206d4590","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750326443845","deploymentInstanceInfo":""} 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:23.8462+00:00] Sending Heartbeat ... 09:48:58 policy-opa-pdp | 2025/06/19 09:47:23 KafkaProducer or producer produce message 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:23.8555+00:00] [IN|KAFKA|policy-pdp-pap] 09:48:58 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp heartbeat","response":null,"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"vehicle","version":"1.0.6"}],"name":"opa-56bc6029-e683-4320-a6d7-f0316897aa5b","requestId":"8a83170a-7682-4fb2-8f36-47c4206d4590","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750326443845","deploymentInstanceInfo":""} 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:23.8556+00:00] messageType: PDP_STATUS 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:23.8556+00:00] discarding event of type PDP_STATUS 09:48:58 policy-opa-pdp | INFO[2025-06-19T09:47:27.9574+00:00] PDP received a request to get data through API 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:27.9575+00:00] datapath to get Data : /node/vehicle 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:27.9577+00:00] Json Data at /node/vehicle: {"vehicles":[{"owner":"user1","status":"available","type":"car","vehicle_id":"v1"},{"owner":"user2","status":"in use","type":"bike","vehicle_id":"v2"}]} 09:48:58 policy-opa-pdp | INFO[2025-06-19T09:47:27.9704+00:00] PDP received a request to update data through API 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:27.9709+00:00] All fields are valid! 
09:48:58 policy-opa-pdp | INFO[2025-06-19T09:47:27.9709+00:00] data : [map[op:add path:/round value:trail]] 09:48:58 policy-opa-pdp | INFO[2025-06-19T09:47:27.9710+00:00] policy name : vehicle 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:27.9711+00:00] deployedPolicies [map[data:[node.slice.capacity.check] policy:[slice.capacity.check] policy-id:slice.capacity.check policy-version:1.0.0] map[data:[node.vehicle] policy:[vehicle] policy-id:vehicle policy-version:1.0.6]] 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:27.9711+00:00] dirParts : [ node vehicle] 09:48:58 policy-opa-pdp | INFO[2025-06-19T09:47:27.9715+00:00] Matched policy: &{Data:[node.vehicle] Policy:[vehicle] PolicyID:vehicle PolicyVersion:1.0.6} 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:27.9715+00:00] root: /node/vehicle 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:27.9715+00:00] path : round 09:48:58 policy-opa-pdp | INFO[2025-06-19T09:47:27.9716+00:00] calling ParsePatchPathEscaped to check the path 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:27.9717+00:00] No path conflicts detected 09:48:58 policy-opa-pdp | INFO[2025-06-19T09:47:27.9718+00:00] Updated the data in the corresponding path successfully 09:48:58 policy-opa-pdp | INFO[2025-06-19T09:47:27.9805+00:00] PDP received a request to get data through API 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:27.9806+00:00] datapath to get Data : /node/vehicle 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:27.9809+00:00] Json Data at /node/vehicle: {"round":"trail","vehicles":[{"owner":"user1","status":"available","type":"car","vehicle_id":"v1"},{"owner":"user2","status":"in use","type":"bike","vehicle_id":"v2"}]} 09:48:58 policy-opa-pdp | INFO[2025-06-19T09:47:27.9903+00:00] PDP received a request to update data through API 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:27.9906+00:00] All fields are valid! 
09:48:58 policy-opa-pdp | INFO[2025-06-19T09:47:27.9907+00:00] data : [map[op:replace path:/round value:%!s(float64=578)]] 09:48:58 policy-opa-pdp | INFO[2025-06-19T09:47:27.9907+00:00] policy name : vehicle 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:27.9907+00:00] deployedPolicies [map[data:[node.slice.capacity.check] policy:[slice.capacity.check] policy-id:slice.capacity.check policy-version:1.0.0] map[data:[node.vehicle] policy:[vehicle] policy-id:vehicle policy-version:1.0.6]] 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:27.9908+00:00] dirParts : [ node vehicle] 09:48:58 policy-opa-pdp | INFO[2025-06-19T09:47:27.9908+00:00] Matched policy: &{Data:[node.vehicle] Policy:[vehicle] PolicyID:vehicle PolicyVersion:1.0.6} 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:27.9908+00:00] root: /node/vehicle 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:27.9908+00:00] path : round 09:48:58 policy-opa-pdp | INFO[2025-06-19T09:47:27.9908+00:00] calling ParsePatchPathEscaped to check the path 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:27.9909+00:00] No path conflicts detected 09:48:58 policy-opa-pdp | INFO[2025-06-19T09:47:27.9909+00:00] Updated the data in the corresponding path successfully 09:48:58 policy-opa-pdp | INFO[2025-06-19T09:47:27.9976+00:00] PDP received a request to get data through API 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:27.9976+00:00] datapath to get Data : /node/vehicle 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:27.9977+00:00] Json Data at /node/vehicle: {"round":578,"vehicles":[{"owner":"user1","status":"available","type":"car","vehicle_id":"v1"},{"owner":"user2","status":"in use","type":"bike","vehicle_id":"v2"}]} 09:48:58 policy-opa-pdp | INFO[2025-06-19T09:47:28.0074+00:00] PDP received a request to update data through API 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:28.0078+00:00] All fields are valid! 
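The three data-update requests in this stretch exercise the standard JSON Patch operations (RFC 6902) against /node/vehicle: add /round with the string "trail", replace it with the number 578, and finally remove it (the remove completes below). The "%!s(float64=578)" in the record above is Go's fmt package flagging a %s verb applied to a float64 value, not a data problem. A minimal sketch of the same three patches, assuming the widely used evanphx/json-patch library rather than the PDP's own patch code:

    package main

    import (
    	"fmt"

    	jsonpatch "github.com/evanphx/json-patch"
    )

    func main() {
    	doc := []byte(`{"vehicles":[{"owner":"user1","status":"available","type":"car","vehicle_id":"v1"}]}`)

    	// The same sequence seen in the log: add, replace, then remove /round.
    	for _, raw := range []string{
    		`[{"op":"add","path":"/round","value":"trail"}]`,
    		`[{"op":"replace","path":"/round","value":578}]`,
    		`[{"op":"remove","path":"/round"}]`,
    	} {
    		patch, err := jsonpatch.DecodePatch([]byte(raw))
    		if err != nil {
    			panic(err)
    		}
    		if doc, err = patch.Apply(doc); err != nil {
    			panic(err)
    		}
    		fmt.Println(string(doc))
    	}
    }

After the remove, the document is back to its original shape, which is exactly what the final GET in this sequence shows.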
09:48:58 policy-opa-pdp | INFO[2025-06-19T09:47:28.0079+00:00] data : [map[op:remove path:/round]] 09:48:58 policy-opa-pdp | INFO[2025-06-19T09:47:28.0081+00:00] policy name : vehicle 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:28.0083+00:00] deployedPolicies [map[data:[node.slice.capacity.check] policy:[slice.capacity.check] policy-id:slice.capacity.check policy-version:1.0.0] map[data:[node.vehicle] policy:[vehicle] policy-id:vehicle policy-version:1.0.6]] 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:28.0083+00:00] dirParts : [ node vehicle] 09:48:58 policy-opa-pdp | INFO[2025-06-19T09:47:28.0085+00:00] Matched policy: &{Data:[node.vehicle] Policy:[vehicle] PolicyID:vehicle PolicyVersion:1.0.6} 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:28.0086+00:00] root: /node/vehicle 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:28.0087+00:00] path : round 09:48:58 policy-opa-pdp | INFO[2025-06-19T09:47:28.0088+00:00] calling ParsePatchPathEscaped to check the path 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:28.0090+00:00] No path conflicts detected 09:48:58 policy-opa-pdp | INFO[2025-06-19T09:47:28.0091+00:00] Updated the data in the corresponding path successfully 09:48:58 policy-opa-pdp | INFO[2025-06-19T09:47:28.0157+00:00] PDP received a request to get data through API 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:28.0158+00:00] datapath to get Data : /node/vehicle 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:28.0160+00:00] Json Data at /node/vehicle: {"vehicles":[{"owner":"user1","status":"available","type":"car","vehicle_id":"v1"},{"owner":"user2","status":"in use","type":"bike","vehicle_id":"v2"}]} 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:28.0254+00:00] PDP received a decision request. 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:28.0255+00:00] Headers processed for requestId: Unknown 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:28.0257+00:00] Validation successful for request fields 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:28.0257+00:00] SDK making a decision 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:28.0271+00:00] RAW opa Decision output: 09:48:58 policy-opa-pdp | { 09:48:58 policy-opa-pdp | "ID": "b9fa770c-d6fc-43fa-a441-b94174993cbb", 09:48:58 policy-opa-pdp | "Result": { 09:48:58 policy-opa-pdp | "action_is_granted": true, 09:48:58 policy-opa-pdp | "allow": true, 09:48:58 policy-opa-pdp | "user_has_vehicle_access": [ 09:48:58 policy-opa-pdp | { 09:48:58 policy-opa-pdp | "status": "available", 09:48:58 policy-opa-pdp | "type": "car" 09:48:58 policy-opa-pdp | } 09:48:58 policy-opa-pdp | ] 09:48:58 policy-opa-pdp | }, 09:48:58 policy-opa-pdp | "Provenance": { 09:48:58 policy-opa-pdp | "version": "1.1.0", 09:48:58 policy-opa-pdp | "build_commit": "", 09:48:58 policy-opa-pdp | "build_timestamp": "", 09:48:58 policy-opa-pdp | "build_hostname": "" 09:48:58 policy-opa-pdp | } 09:48:58 policy-opa-pdp | } 09:48:58 policy-opa-pdp | {"decision_id":"b9fa770c-d6fc-43fa-a441-b94174993cbb","input":{"actions":["use"],"attributes":["type","status"],"user":"user1","vehicle_id":"v1"},"labels":{"id":"ef72ff35-4194-46e7-b48c-ea86b3d870a5","version":"1.1.0"},"level":"info","metrics":{"timer_rego_external_resolve_ns":720,"timer_rego_query_compile_ns":144933,"timer_rego_query_eval_ns":410079,"timer_rego_query_parse_ns":174044,"timer_sdk_decision_eval_ns":895151},"msg":"Decision 
Log","nd_builtin_cache":null,"path":"vehicle","result":{"action_is_granted":true,"allow":true,"user_has_vehicle_access":[{"status":"available","type":"car"}]},"time":"2025-06-19T09:47:28Z","timestamp":"2025-06-19T09:47:28.025812691Z","type":"openpolicyagent.org/decision_logs"} 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:28.0346+00:00] PDP received a decision request. 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:28.0347+00:00] Headers processed for requestId: Unknown 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:28.0350+00:00] Validation successful for request fields 09:48:58 policy-opa-pdp | WARN[2025-06-19T09:47:28.0353+00:00] Policy Name vehile does not exist 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:28.0423+00:00] PDP received a decision request. 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:28.0423+00:00] Headers processed for requestId: Unknown 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:28.0426+00:00] Validation successful for request fields 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:28.0427+00:00] SDK making a decision 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:28.0438+00:00] RAW opa Decision output: 09:48:58 policy-opa-pdp | { 09:48:58 policy-opa-pdp | "ID": "1c9c6a34-52fc-403b-863d-f3214b97f7e3", 09:48:58 policy-opa-pdp | "Result": { 09:48:58 policy-opa-pdp | "action_is_granted": true, 09:48:58 policy-opa-pdp | "allow": true, 09:48:58 policy-opa-pdp | "user_has_vehicle_access": [ 09:48:58 policy-opa-pdp | { 09:48:58 policy-opa-pdp | "status": "available", 09:48:58 policy-opa-pdp | "type": "car" 09:48:58 policy-opa-pdp | } 09:48:58 policy-opa-pdp | ] 09:48:58 policy-opa-pdp | {"decision_id":"1c9c6a34-52fc-403b-863d-f3214b97f7e3","input":{"actions":["use"],"attributes":["type","status"],"user":"user1","vehicle_id":"v1"},"labels":{"id":"ef72ff35-4194-46e7-b48c-ea86b3d870a5","version":"1.1.0"},"level":"info","metrics":{"timer_rego_external_resolve_ns":1170,"timer_rego_query_eval_ns":447310,"timer_sdk_decision_eval_ns":532571},"msg":"Decision Log","nd_builtin_cache":null,"path":"vehicle","result":{"action_is_granted":true,"allow":true,"user_has_vehicle_access":[{"status":"available","type":"car"}]},"time":"2025-06-19T09:47:28Z","timestamp":"2025-06-19T09:47:28.042789653Z","type":"openpolicyagent.org/decision_logs"} 09:48:58 policy-opa-pdp | }, 09:48:58 policy-opa-pdp | "Provenance": { 09:48:58 policy-opa-pdp | "version": "1.1.0", 09:48:58 policy-opa-pdp | "build_commit": "", 09:48:58 policy-opa-pdp | "build_timestamp": "", 09:48:58 policy-opa-pdp | "build_hostname": "" 09:48:58 policy-opa-pdp | } 09:48:58 policy-opa-pdp | } 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:28.3479+00:00] [IN|KAFKA|policy-pdp-pap] 09:48:58 policy-opa-pdp | {"source":"pap-1593551e-cd74-40f6-b32a-093109ad43dc","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"vehicle","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"01008446-0225-4952-be3e-34088d5cf19c","timestampMs":1750326448326,"name":"opa-56bc6029-e683-4320-a6d7-f0316897aa5b","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:28.3480+00:00] messageType: PDP_UPDATE 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:28.3483+00:00] PDP_UPDATE Message received: 
{"source":"pap-1593551e-cd74-40f6-b32a-093109ad43dc","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"vehicle","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"01008446-0225-4952-be3e-34088d5cf19c","timestampMs":1750326448326,"name":"opa-56bc6029-e683-4320-a6d7-f0316897aa5b","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 09:48:58 policy-opa-pdp | INFO[2025-06-19T09:47:28.3484+00:00] Found Policies to be undeployed 09:48:58 policy-opa-pdp | INFO[2025-06-19T09:47:28.3484+00:00] Extracted Policy Name: vehicle, Version: 1.0.6 for undeployment 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:28.3484+00:00] Deleting Policy from OPA : /vehicle 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:28.3508+00:00] Removing policy directory: /opt/policies/vehicle 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:28.3520+00:00] Deleting data from OPA : /node/vehicle 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:28.3520+00:00] Analyzing dataPath: /node/vehicle 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:28.3521+00:00] Path segments: [ node vehicle] 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:28.3521+00:00] Path doesn't have any parent-child hierarchy;so returning the original path: /node/vehicle 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:28.3521+00:00] Removing data directory: /opt/data/node/vehicle 09:48:58 policy-opa-pdp | INFO[2025-06-19T09:47:28.3523+00:00] PoliciesDeployed Map: { 09:48:58 policy-opa-pdp | "deployed_policies_dict": [ 09:48:58 policy-opa-pdp | { 09:48:58 policy-opa-pdp | "data": [ 09:48:58 policy-opa-pdp | "node.slice.capacity.check" 09:48:58 policy-opa-pdp | ], 09:48:58 policy-opa-pdp | "policy": [ 09:48:58 policy-opa-pdp | "slice.capacity.check" 09:48:58 policy-opa-pdp | ], 09:48:58 policy-opa-pdp | "policy-id": "slice.capacity.check", 09:48:58 policy-opa-pdp | "policy-version": "1.0.0" 09:48:58 policy-opa-pdp | } 09:48:58 policy-opa-pdp | ] 09:48:58 policy-opa-pdp | } 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:28.3523+00:00] Policies Map After Undeployment : { 09:48:58 policy-opa-pdp | "deployed_policies_dict": [ 09:48:58 policy-opa-pdp | { 09:48:58 policy-opa-pdp | "data": [ 09:48:58 policy-opa-pdp | "node.slice.capacity.check" 09:48:58 policy-opa-pdp | ], 09:48:58 policy-opa-pdp | "policy": [ 09:48:58 policy-opa-pdp | "slice.capacity.check" 09:48:58 policy-opa-pdp | ], 09:48:58 policy-opa-pdp | "policy-id": "slice.capacity.check", 09:48:58 policy-opa-pdp | "policy-version": "1.0.0" 09:48:58 policy-opa-pdp | } 09:48:58 policy-opa-pdp | ] 09:48:58 policy-opa-pdp | } 09:48:58 policy-opa-pdp | INFO[2025-06-19T09:47:28.3524+00:00] Processed policies_to_be_undeployed successfully 09:48:58 policy-opa-pdp | INFO[2025-06-19T09:47:28.3524+00:00] Sending PDP Status With Update Response 09:48:58 policy-opa-pdp | 2025/06/19 09:47:28 KafkaProducer or producer produce message 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:28.3527+00:00] [OUT|KAFKA|policy-pdp-pap] 09:48:58 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"01008446-0225-4952-be3e-34088d5cf19c","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"vehicle\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-56bc6029-e683-4320-a6d7-f0316897aa5b","requestId":"3b25c57f-4c00-4728-a0ce-d928cf43c0ea","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750326448352","deploymentInstanceInfo":""} 
09:48:58 policy-opa-pdp | INFO[2025-06-19T09:47:28.3527+00:00] PDP_STATUS Message Sent Successfully 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:28.3527+00:00] 0 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:28.3599+00:00] [IN|KAFKA|policy-pdp-pap] 09:48:58 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"01008446-0225-4952-be3e-34088d5cf19c","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"vehicle\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-56bc6029-e683-4320-a6d7-f0316897aa5b","requestId":"3b25c57f-4c00-4728-a0ce-d928cf43c0ea","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750326448352","deploymentInstanceInfo":""} 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:28.3600+00:00] messageType: PDP_STATUS 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:28.3601+00:00] discarding event of type PDP_STATUS 09:48:58 policy-opa-pdp | INFO[2025-06-19T09:47:28.7557+00:00] PDP received a request to get data through API 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:28.7557+00:00] datapath to get Data : /node/vehicle 09:48:58 policy-opa-pdp | WARN[2025-06-19T09:47:28.7558+00:00] Error in reading data under /node/vehicle path 09:48:58 policy-opa-pdp | ERRO[2025-06-19T09:47:28.7558+00:00] Error in getting data - storage_not_found_error: /node/vehicle: document does not exist 09:48:58 policy-opa-pdp | INFO[2025-06-19T09:47:28.7671+00:00] PDP received a request to update data through API 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:28.7674+00:00] All fields are valid! 09:48:58 policy-opa-pdp | INFO[2025-06-19T09:47:28.7675+00:00] data : [map[op:remove path:/round]] 09:48:58 policy-opa-pdp | INFO[2025-06-19T09:47:28.7675+00:00] policy name : vehicle 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:28.7676+00:00] deployedPolicies [map[data:[node.slice.capacity.check] policy:[slice.capacity.check] policy-id:slice.capacity.check policy-version:1.0.0]] 09:48:58 policy-opa-pdp | ERRO[2025-06-19T09:47:28.7677+00:00] Policy associated with the patch request does not exist 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:29.5024+00:00] [IN|KAFKA|policy-pdp-pap] 09:48:58 policy-opa-pdp | 
{"source":"pap-1593551e-cd74-40f6-b32a-093109ad43dc","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.abac":"ewogICAgInNlbnNvcl9kYXRhIjogWwogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDEiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiU3JpIExhbmthIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjI4IEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjYiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDAyIiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkNvbG9tYm8iLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMzAgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjEyMDAgbW0iLAogICAgICAgICAgICAid2luZHNwZWVkIjogIjYuMCBtL3MiLAogICAgICAgICAgICAiaHVtaWRpdHkiOiAiNDUlIiwKICAgICAgICAgICAgInBhcnRpY2xlX2RlbnNpdHkiOiAiMS41IGcvbCIsCiAgICAgICAgICAgICJ0aW1lc3RhbXAiOiAiMjAyNC0wMi0yNiIKICAgICAgICB9LAogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDMiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiS2FuZHkiLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMjUgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjgwMCBtbSIsCiAgICAgICAgICAgICJ3aW5kc3BlZWQiOiAiNC41IG0vcyIsCiAgICAgICAgICAgICJodW1pZGl0eSI6ICI2MCUiLAogICAgICAgICAgICAicGFydGljbGVfZGVuc2l0eSI6ICIxLjEgZy9sIiwKICAgICAgICAgICAgInRpbWVzdGFtcCI6ICIyMDI0LTAyLTI2IgogICAgICAgIH0sCiAgICAgICAgewogICAgICAgICAgICAiaWQiOiAiMDAwNCIsCiAgICAgICAgICAgICJsb2NhdGlvbiI6ICJHYWxsZSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI3LjIgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjMwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuOCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA1IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkphZmZuYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICItNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiMzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICIzLjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjIwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjAuOSBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA2IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIlRyaW5jb21hbGVlIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjIwIEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjU1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA3IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk51d2FyYSBFbGl5YSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNjAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI0LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjUwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA4IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkFudXJhZGhhcHVyYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyOCBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA5IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk1hdGFyYSIsC
iAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzMiBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiOTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI2LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjY1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfQogICAgXQp9"},"policy":{"abac":"cGFja2FnZSBhYmFjCgppbXBvcnQgcmVnby52MQoKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQoKYWxsb3cgaWYgewogdmlld2FibGVfc2Vuc29yX2RhdGEKIGFjdGlvbl9pc19yZWFkCn0KCmFjdGlvbl9pc19yZWFkIGlmICJyZWFkIiBpbiBpbnB1dC5hY3Rpb25zCgp2aWV3YWJsZV9zZW5zb3JfZGF0YSBjb250YWlucyB2aWV3X2RhdGEgaWYgewogc29tZSBzZW5zb3JfZGF0YSBpbiBkYXRhLm5vZGUuYWJhYy5zZW5zb3JfZGF0YQogc2Vuc29yX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KIHNlbnNvcl9kYXRhLnRpbWVzdGFtcCA8IGlucHV0LnRpbWVfcGVyaW9kLnRvCgogdmlld19kYXRhIDo9IHtkYXRhdHlwZTogc2Vuc29yX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"abac","version":"1.0.7","metadata":{"policy-id":"abac","policy-version":"1.0.7"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"c64624c3-c637-4201-88e3-7b3627bbd0fb","timestampMs":1750326449481,"name":"opa-56bc6029-e683-4320-a6d7-f0316897aa5b","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:29.5026+00:00] messageType: PDP_UPDATE 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:29.5028+00:00] PDP_UPDATE Message received: {"source":"pap-1593551e-cd74-40f6-b32a-093109ad43dc","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.abac":"ewogICAgInNlbnNvcl9kYXRhIjogWwogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDEiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiU3JpIExhbmthIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjI4IEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjYiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDAyIiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkNvbG9tYm8iLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMzAgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjEyMDAgbW0iLAogICAgICAgICAgICAid2luZHNwZWVkIjogIjYuMCBtL3MiLAogICAgICAgICAgICAiaHVtaWRpdHkiOiAiNDUlIiwKICAgICAgICAgICAgInBhcnRpY2xlX2RlbnNpdHkiOiAiMS41IGcvbCIsCiAgICAgICAgICAgICJ0aW1lc3RhbXAiOiAiMjAyNC0wMi0yNiIKICAgICAgICB9LAogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDMiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiS2FuZHkiLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMjUgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjgwMCBtbSIsCiAgICAgICAgICAgICJ3aW5kc3BlZWQiOiAiNC41IG0vcyIsCiAgICAgICAgICAgICJodW1pZGl0eSI6ICI2MCUiLAogICAgICAgICAgICAicGFydGljbGVfZGVuc2l0eSI6ICIxLjEgZy9sIiwKICAgICAgICAgICAgInRpbWVzdGFtcCI6ICIyMDI0LTAyLTI2IgogICAgICAgIH0sCiAgICAgICAgewogICAgICAgICAgICAiaWQiOiAiMDAwNCIsCiAgICAgICAgICAgICJsb2NhdGlvbiI6ICJHYWxsZSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI3LjIgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjMwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuOCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA1IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkphZmZuYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICItNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiMzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICIzLjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjIwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjAuOSBnL2wi
LAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA2IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIlRyaW5jb21hbGVlIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjIwIEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjU1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA3IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk51d2FyYSBFbGl5YSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNjAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI0LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjUwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA4IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkFudXJhZGhhcHVyYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyOCBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA5IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk1hdGFyYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzMiBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiOTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI2LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjY1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfQogICAgXQp9"},"policy":{"abac":"cGFja2FnZSBhYmFjCgppbXBvcnQgcmVnby52MQoKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQoKYWxsb3cgaWYgewogdmlld2FibGVfc2Vuc29yX2RhdGEKIGFjdGlvbl9pc19yZWFkCn0KCmFjdGlvbl9pc19yZWFkIGlmICJyZWFkIiBpbiBpbnB1dC5hY3Rpb25zCgp2aWV3YWJsZV9zZW5zb3JfZGF0YSBjb250YWlucyB2aWV3X2RhdGEgaWYgewogc29tZSBzZW5zb3JfZGF0YSBpbiBkYXRhLm5vZGUuYWJhYy5zZW5zb3JfZGF0YQogc2Vuc29yX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KIHNlbnNvcl9kYXRhLnRpbWVzdGFtcCA8IGlucHV0LnRpbWVfcGVyaW9kLnRvCgogdmlld19kYXRhIDo9IHtkYXRhdHlwZTogc2Vuc29yX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"abac","version":"1.0.7","metadata":{"policy-id":"abac","policy-version":"1.0.7"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"c64624c3-c637-4201-88e3-7b3627bbd0fb","timestampMs":1750326449481,"name":"opa-56bc6029-e683-4320-a6d7-f0316897aa5b","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:29.5028+00:00] Check if Policy is Already Deployed: { 09:48:58 policy-opa-pdp | "deployed_policies_dict": [ 09:48:58 policy-opa-pdp | { 09:48:58 policy-opa-pdp | "data": [ 09:48:58 policy-opa-pdp | "node.slice.capacity.check" 09:48:58 policy-opa-pdp | ], 09:48:58 policy-opa-pdp | "policy": [ 09:48:58 policy-opa-pdp | "slice.capacity.check" 09:48:58 policy-opa-pdp | ], 09:48:58 policy-opa-pdp | "policy-id": "slice.capacity.check", 09:48:58 policy-opa-pdp | "policy-version": "1.0.0" 09:48:58 policy-opa-pdp | } 09:48:58 policy-opa-pdp | ] 09:48:58 policy-opa-pdp | } 09:48:58 policy-opa-pdp | INFO[2025-06-19T09:47:29.5030+00:00] Policy is new and should be deployed: abac 1.0.7 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:29.5031+00:00] Policy Is Allowed: abac 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:29.5031+00:00] Validating properties data for policy: abac 09:48:58 
policy-opa-pdp | DEBU[2025-06-19T09:47:29.5031+00:00] Validating properties policy for policy: abac 09:48:58 policy-opa-pdp | INFO[2025-06-19T09:47:29.5032+00:00] Validation successful for policy: abac 09:48:58 policy-opa-pdp | INFO[2025-06-19T09:47:29.5033+00:00] Directory created: /opt/policies/abac 09:48:58 policy-opa-pdp | INFO[2025-06-19T09:47:29.5037+00:00] Policy file saved: /opt/policies/abac/policy.rego 09:48:58 policy-opa-pdp | INFO[2025-06-19T09:47:29.5039+00:00] Directory created: /opt/data/node/abac 09:48:58 policy-opa-pdp | INFO[2025-06-19T09:47:29.5040+00:00] Data file saved: /opt/data/node/abac/data.json 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:29.5040+00:00] Before calling combinedoutput 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:29.5279+00:00] Bundle Built Successfully.... 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:29.5310+00:00] storage not found creating : /node/abac 09:48:58 policy-opa-pdp | INFO[2025-06-19T09:47:29.5313+00:00] PoliciesDeployed Map: { 09:48:58 policy-opa-pdp | "deployed_policies_dict": [ 09:48:58 policy-opa-pdp | { 09:48:58 policy-opa-pdp | "data": [ 09:48:58 policy-opa-pdp | "node.slice.capacity.check" 09:48:58 policy-opa-pdp | ], 09:48:58 policy-opa-pdp | "policy": [ 09:48:58 policy-opa-pdp | "slice.capacity.check" 09:48:58 policy-opa-pdp | ], 09:48:58 policy-opa-pdp | "policy-id": "slice.capacity.check", 09:48:58 policy-opa-pdp | "policy-version": "1.0.0" 09:48:58 policy-opa-pdp | }, 09:48:58 policy-opa-pdp | { 09:48:58 policy-opa-pdp | "data": [ 09:48:58 policy-opa-pdp | "node.abac" 09:48:58 policy-opa-pdp | ], 09:48:58 policy-opa-pdp | "policy": [ 09:48:58 policy-opa-pdp | "abac" 09:48:58 policy-opa-pdp | ], 09:48:58 policy-opa-pdp | "policy-id": "abac", 09:48:58 policy-opa-pdp | "policy-version": "1.0.7" 09:48:58 policy-opa-pdp | } 09:48:58 policy-opa-pdp | ] 09:48:58 policy-opa-pdp | } 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:29.5313+00:00] Loaded Policy: abac 09:48:58 policy-opa-pdp | INFO[2025-06-19T09:47:29.5314+00:00] Processed policies_to_be_deployed successfully 09:48:58 policy-opa-pdp | INFO[2025-06-19T09:47:29.5315+00:00] Sending PDP Status With Update Response 09:48:58 policy-opa-pdp | 2025/06/19 09:47:29 KafkaProducer or producer produce message 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:29.5317+00:00] [OUT|KAFKA|policy-pdp-pap] 09:48:58 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"c64624c3-c637-4201-88e3-7b3627bbd0fb","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"abac\": \"1.0.7\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"abac","version":"1.0.7"}],"name":"opa-56bc6029-e683-4320-a6d7-f0316897aa5b","requestId":"c31760b8-8009-4602-beb8-944ab8c69ba1","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750326449531","deploymentInstanceInfo":""} 09:48:58 policy-opa-pdp | INFO[2025-06-19T09:47:29.5318+00:00] PDP_STATUS Message Sent Successfully 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:29.5318+00:00] 0 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:29.5407+00:00] [IN|KAFKA|policy-pdp-pap] 09:48:58 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"c64624c3-c637-4201-88e3-7b3627bbd0fb","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for 
all policies: {\n \"abac\": \"1.0.7\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"abac","version":"1.0.7"}],"name":"opa-56bc6029-e683-4320-a6d7-f0316897aa5b","requestId":"c31760b8-8009-4602-beb8-944ab8c69ba1","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750326449531","deploymentInstanceInfo":""} 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:29.5424+00:00] messageType: PDP_STATUS 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:29.5424+00:00] discarding event of type PDP_STATUS 09:48:58 policy-opa-pdp | INFO[2025-06-19T09:47:53.5809+00:00] PDP received a request to get data through API 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:53.5810+00:00] datapath to get Data : /node/abac 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:53.5812+00:00] Json Data at /node/abac: {"sensor_data":[{"humidity":"40%","id":"0001","location":"Sri Lanka","particle_density":"1.3 g/l","precipitation":"1000 mm","temperature":"28 C","timestamp":"2024-02-26","windspeed":"5.5 m/s"},{"humidity":"45%","id":"0002","location":"Colombo","particle_density":"1.5 g/l","precipitation":"1200 mm","temperature":"30 C","timestamp":"2024-02-26","windspeed":"6.0 m/s"},{"humidity":"60%","id":"0003","location":"Kandy","particle_density":"1.1 g/l","precipitation":"800 mm","temperature":"25 C","timestamp":"2024-02-26","windspeed":"4.5 m/s"},{"humidity":"30%","id":"0004","location":"Galle","particle_density":"1.8 g/l","precipitation":"500 mm","temperature":"35 C","timestamp":"2024-02-27","windspeed":"7.2 m/s"},{"humidity":"20%","id":"0005","location":"Jaffna","particle_density":"0.9 g/l","precipitation":"300 mm","temperature":"-5 C","timestamp":"2024-02-27","windspeed":"3.8 m/s"},{"humidity":"55%","id":"0006","location":"Trincomalee","particle_density":"1.2 g/l","precipitation":"1000 mm","temperature":"20 C","timestamp":"2024-02-28","windspeed":"5.0 m/s"},{"humidity":"50%","id":"0007","location":"Nuwara Eliya","particle_density":"1.3 g/l","precipitation":"600 mm","temperature":"25 C","timestamp":"2024-02-28","windspeed":"4.0 m/s"},{"humidity":"40%","id":"0008","location":"Anuradhapura","particle_density":"1.4 g/l","precipitation":"700 mm","temperature":"28 C","timestamp":"2024-02-29","windspeed":"5.8 m/s"},{"humidity":"65%","id":"0009","location":"Matara","particle_density":"1.6 g/l","precipitation":"900 mm","temperature":"32 C","timestamp":"2024-02-29","windspeed":"6.5 m/s"}]} 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:53.5912+00:00] PDP received a decision request. 
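The /node/abac document above is simply the decoded form of the base64 "data" blob carried in the earlier PDP_UPDATE, and the "policy" blob decodes the same way to a Rego module: package abac with default allow := false, an allow rule that requires both action_is_read ("read" in input.actions) and a non-empty viewable_sensor_data set, and a comprehension that keeps, for each sensor_data row with input.time_period.from <= timestamp < input.time_period.to, only the fields listed in input.datatypes. That is why the decisions below, with a time_period of 2024-02-27 to 2024-02-29, return exactly the Galle, Jaffna, Trincomalee and Nuwara Eliya rows and only the four requested fields. A sketch of the decode step, assuming standard base64:

    package main

    import (
    	"encoding/base64"
    	"fmt"
    )

    func main() {
    	// Opening characters of the "policy" blob from the PDP_UPDATE above;
    	// the full string decodes to the complete abac Rego module.
    	blob := "cGFja2FnZSBhYmFjCgppbXBvcnQgcmVnby52MQ=="
    	out, err := base64.StdEncoding.DecodeString(blob)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(string(out)) // "package abac\n\nimport rego.v1"
    }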
09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:53.5913+00:00] Headers processed for requestId: Unknown 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:53.5915+00:00] Validation successful for request fields 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:53.5915+00:00] SDK making a decision 09:48:58 policy-opa-pdp | {"decision_id":"f0461c18-ee43-48fd-a0cb-5c40106fa19e","input":{"actions":["read"],"datatypes":["location","temperature","precipitation","windspeed"],"time_period":{"from":"2024-02-27","to":"2024-02-29"}},"labels":{"id":"ef72ff35-4194-46e7-b48c-ea86b3d870a5","version":"1.1.0"},"level":"info","metrics":{"timer_rego_external_resolve_ns":470,"timer_rego_query_compile_ns":92942,"timer_rego_query_eval_ns":539742,"timer_rego_query_parse_ns":68782,"timer_sdk_decision_eval_ns":827119},"msg":"Decision Log","nd_builtin_cache":null,"path":"abac","result":{"action_is_read":true,"allow":true,"viewable_sensor_data":[{"location":"Galle","precipitation":"500 mm","temperature":"35 C","windspeed":"7.2 m/s"},{"location":"Jaffna","precipitation":"300 mm","temperature":"-5 C","windspeed":"3.8 m/s"},{"location":"Nuwara Eliya","precipitation":"600 mm","temperature":"25 C","windspeed":"4.0 m/s"},{"location":"Trincomalee","precipitation":"1000 mm","temperature":"20 C","windspeed":"5.0 m/s"}]},"time":"2025-06-19T09:47:53Z","timestamp":"2025-06-19T09:47:53.591560344Z","type":"openpolicyagent.org/decision_logs"} 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:53.5926+00:00] RAW opa Decision output: 09:48:58 policy-opa-pdp | { 09:48:58 policy-opa-pdp | "ID": "f0461c18-ee43-48fd-a0cb-5c40106fa19e", 09:48:58 policy-opa-pdp | "Result": { 09:48:58 policy-opa-pdp | "action_is_read": true, 09:48:58 policy-opa-pdp | "allow": true, 09:48:58 policy-opa-pdp | "viewable_sensor_data": [ 09:48:58 policy-opa-pdp | { 09:48:58 policy-opa-pdp | "location": "Galle", 09:48:58 policy-opa-pdp | "precipitation": "500 mm", 09:48:58 policy-opa-pdp | "temperature": "35 C", 09:48:58 policy-opa-pdp | "windspeed": "7.2 m/s" 09:48:58 policy-opa-pdp | }, 09:48:58 policy-opa-pdp | { 09:48:58 policy-opa-pdp | "location": "Jaffna", 09:48:58 policy-opa-pdp | "precipitation": "300 mm", 09:48:58 policy-opa-pdp | "temperature": "-5 C", 09:48:58 policy-opa-pdp | "windspeed": "3.8 m/s" 09:48:58 policy-opa-pdp | }, 09:48:58 policy-opa-pdp | { 09:48:58 policy-opa-pdp | "location": "Nuwara Eliya", 09:48:58 policy-opa-pdp | "precipitation": "600 mm", 09:48:58 policy-opa-pdp | "temperature": "25 C", 09:48:58 policy-opa-pdp | "windspeed": "4.0 m/s" 09:48:58 policy-opa-pdp | }, 09:48:58 policy-opa-pdp | { 09:48:58 policy-opa-pdp | "location": "Trincomalee", 09:48:58 policy-opa-pdp | "precipitation": "1000 mm", 09:48:58 policy-opa-pdp | "temperature": "20 C", 09:48:58 policy-opa-pdp | "windspeed": "5.0 m/s" 09:48:58 policy-opa-pdp | } 09:48:58 policy-opa-pdp | ] 09:48:58 policy-opa-pdp | }, 09:48:58 policy-opa-pdp | "Provenance": { 09:48:58 policy-opa-pdp | "version": "1.1.0", 09:48:58 policy-opa-pdp | "build_commit": "", 09:48:58 policy-opa-pdp | "build_timestamp": "", 09:48:58 policy-opa-pdp | "build_hostname": "" 09:48:58 policy-opa-pdp | } 09:48:58 policy-opa-pdp | } 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:53.6020+00:00] PDP received a decision request. 
09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:53.6021+00:00] Headers processed for requestId: Unknown 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:53.6023+00:00] Validation successful for request fields 09:48:58 policy-opa-pdp | WARN[2025-06-19T09:47:53.6023+00:00] Policy Name abc does not exist 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:53.6141+00:00] PDP received a decision request. 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:53.6142+00:00] Headers processed for requestId: Unknown 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:53.6145+00:00] Validation successful for request fields 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:53.6145+00:00] SDK making a decision 09:48:58 policy-opa-pdp | {"decision_id":"e35a96d8-9725-4ee1-9926-2006461fbf25","input":{"actions":["read"],"datatypes":["location","temperature","precipitation","windspeed"],"time_period":{"from":"2024-02-27","to":"2024-02-29"}},"labels":{"id":"ef72ff35-4194-46e7-b48c-ea86b3d870a5","version":"1.1.0"},"level":"info","metrics":{"timer_rego_external_resolve_ns":940,"timer_rego_query_eval_ns":977021,"timer_sdk_decision_eval_ns":1124394},"msg":"Decision Log","nd_builtin_cache":null,"path":"abac","result":{"action_is_read":true,"allow":true,"viewable_sensor_data":[{"location":"Galle","precipitation":"500 mm","temperature":"35 C","windspeed":"7.2 m/s"},{"location":"Jaffna","precipitation":"300 mm","temperature":"-5 C","windspeed":"3.8 m/s"},{"location":"Nuwara Eliya","precipitation":"600 mm","temperature":"25 C","windspeed":"4.0 m/s"},{"location":"Trincomalee","precipitation":"1000 mm","temperature":"20 C","windspeed":"5.0 m/s"}]},"time":"2025-06-19T09:47:53Z","timestamp":"2025-06-19T09:47:53.614659251Z","type":"openpolicyagent.org/decision_logs"} 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:53.6161+00:00] RAW opa Decision output: 09:48:58 policy-opa-pdp | { 09:48:58 policy-opa-pdp | "ID": "e35a96d8-9725-4ee1-9926-2006461fbf25", 09:48:58 policy-opa-pdp | "Result": { 09:48:58 policy-opa-pdp | "action_is_read": true, 09:48:58 policy-opa-pdp | "allow": true, 09:48:58 policy-opa-pdp | "viewable_sensor_data": [ 09:48:58 policy-opa-pdp | { 09:48:58 policy-opa-pdp | "location": "Galle", 09:48:58 policy-opa-pdp | "precipitation": "500 mm", 09:48:58 policy-opa-pdp | "temperature": "35 C", 09:48:58 policy-opa-pdp | "windspeed": "7.2 m/s" 09:48:58 policy-opa-pdp | }, 09:48:58 policy-opa-pdp | { 09:48:58 policy-opa-pdp | "location": "Jaffna", 09:48:58 policy-opa-pdp | "precipitation": "300 mm", 09:48:58 policy-opa-pdp | "temperature": "-5 C", 09:48:58 policy-opa-pdp | "windspeed": "3.8 m/s" 09:48:58 policy-opa-pdp | }, 09:48:58 policy-opa-pdp | { 09:48:58 policy-opa-pdp | "location": "Nuwara Eliya", 09:48:58 policy-opa-pdp | "precipitation": "600 mm", 09:48:58 policy-opa-pdp | "temperature": "25 C", 09:48:58 policy-opa-pdp | "windspeed": "4.0 m/s" 09:48:58 policy-opa-pdp | }, 09:48:58 policy-opa-pdp | { 09:48:58 policy-opa-pdp | "location": "Trincomalee", 09:48:58 policy-opa-pdp | "precipitation": "1000 mm", 09:48:58 policy-opa-pdp | "temperature": "20 C", 09:48:58 policy-opa-pdp | "windspeed": "5.0 m/s" 09:48:58 policy-opa-pdp | } 09:48:58 policy-opa-pdp | ] 09:48:58 policy-opa-pdp | }, 09:48:58 policy-opa-pdp | "Provenance": { 09:48:58 policy-opa-pdp | "version": "1.1.0", 09:48:58 policy-opa-pdp | "build_commit": "", 09:48:58 policy-opa-pdp | "build_timestamp": "", 09:48:58 policy-opa-pdp | "build_hostname": "" 09:48:58 policy-opa-pdp | } 09:48:58 policy-opa-pdp | } 09:48:58 policy-opa-pdp | 
DEBU[2025-06-19T09:47:54.1924+00:00] [IN|KAFKA|policy-pdp-pap] 09:48:58 policy-opa-pdp | {"source":"pap-1593551e-cd74-40f6-b32a-093109ad43dc","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"abac","version":"1.0.7"}],"messageName":"PDP_UPDATE","requestId":"dc451126-590b-4085-93d5-dccf4e99bfd1","timestampMs":1750326474170,"name":"opa-56bc6029-e683-4320-a6d7-f0316897aa5b","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:54.1925+00:00] messageType: PDP_UPDATE 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:54.1929+00:00] PDP_UPDATE Message received: {"source":"pap-1593551e-cd74-40f6-b32a-093109ad43dc","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"abac","version":"1.0.7"}],"messageName":"PDP_UPDATE","requestId":"dc451126-590b-4085-93d5-dccf4e99bfd1","timestampMs":1750326474170,"name":"opa-56bc6029-e683-4320-a6d7-f0316897aa5b","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 09:48:58 policy-opa-pdp | INFO[2025-06-19T09:47:54.1930+00:00] Found Policies to be undeployed 09:48:58 policy-opa-pdp | INFO[2025-06-19T09:47:54.1931+00:00] Extracted Policy Name: abac, Version: 1.0.7 for undeployment 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:54.1933+00:00] Deleting Policy from OPA : /abac 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:54.1970+00:00] Removing policy directory: /opt/policies/abac 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:54.1980+00:00] Deleting data from OPA : /node/abac 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:54.1981+00:00] Analyzing dataPath: /node/abac 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:54.1982+00:00] Path segments: [ node abac] 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:54.1983+00:00] Path doesn't have any parent-child hierarchy; so returning the original path: /node/abac 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:54.1984+00:00] Removing data directory: /opt/data/node/abac 09:48:58 policy-opa-pdp | INFO[2025-06-19T09:47:54.1987+00:00] PoliciesDeployed Map: { 09:48:58 policy-opa-pdp | "deployed_policies_dict": [ 09:48:58 policy-opa-pdp | { 09:48:58 policy-opa-pdp | "data": [ 09:48:58 policy-opa-pdp | "node.slice.capacity.check" 09:48:58 policy-opa-pdp | ], 09:48:58 policy-opa-pdp | "policy": [ 09:48:58 policy-opa-pdp | "slice.capacity.check" 09:48:58 policy-opa-pdp | ], 09:48:58 policy-opa-pdp | "policy-id": "slice.capacity.check", 09:48:58 policy-opa-pdp | "policy-version": "1.0.0" 09:48:58 policy-opa-pdp | } 09:48:58 policy-opa-pdp | ] 09:48:58 policy-opa-pdp | } 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:54.1988+00:00] Policies Map After Undeployment : { 09:48:58 policy-opa-pdp | "deployed_policies_dict": [ 09:48:58 policy-opa-pdp | { 09:48:58 policy-opa-pdp | "data": [ 09:48:58 policy-opa-pdp | "node.slice.capacity.check" 09:48:58 policy-opa-pdp | ], 09:48:58 policy-opa-pdp | "policy": [ 09:48:58 policy-opa-pdp | "slice.capacity.check" 09:48:58 policy-opa-pdp | ], 09:48:58 policy-opa-pdp | "policy-id": "slice.capacity.check", 09:48:58 policy-opa-pdp | "policy-version": "1.0.0" 09:48:58 policy-opa-pdp | } 09:48:58 policy-opa-pdp | ] 09:48:58 policy-opa-pdp | } 09:48:58 policy-opa-pdp | INFO[2025-06-19T09:47:54.1989+00:00] Processed policies_to_be_undeployed successfully 09:48:58 policy-opa-pdp | INFO[2025-06-19T09:47:54.1990+00:00] Sending PDP Status With Update Response 09:48:58 policy-opa-pdp | 2025/06/19 09:47:54 KafkaProducer or producer produce message 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:54.1993+00:00] [OUT|KAFKA|policy-pdp-pap] 09:48:58 policy-opa-pdp | 
{"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"dc451126-590b-4085-93d5-dccf4e99bfd1","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"abac\": \"1.0.7\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-56bc6029-e683-4320-a6d7-f0316897aa5b","requestId":"1d943df2-9a51-4de8-894c-c4518a5ee104","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750326474199","deploymentInstanceInfo":""} 09:48:58 policy-opa-pdp | INFO[2025-06-19T09:47:54.1994+00:00] PDP_STATUS Message Sent Successfully 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:54.1995+00:00] 0 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:54.2071+00:00] [IN|KAFKA|policy-pdp-pap] 09:48:58 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"dc451126-590b-4085-93d5-dccf4e99bfd1","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"abac\": \"1.0.7\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-56bc6029-e683-4320-a6d7-f0316897aa5b","requestId":"1d943df2-9a51-4de8-894c-c4518a5ee104","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750326474199","deploymentInstanceInfo":""} 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:54.2073+00:00] messageType: PDP_STATUS 09:48:58 policy-opa-pdp | DEBU[2025-06-19T09:47:54.2073+00:00] discarding event of type PDP_STATUS 09:48:58 policy-pap | Waiting for api port 6969... 09:48:58 policy-pap | api (172.17.0.6:6969) open 09:48:58 policy-pap | Waiting for kafka port 9092... 09:48:58 policy-pap | kafka (172.17.0.8:9092) open 09:48:58 policy-pap | Policy pap config file: /opt/app/policy/pap/etc/papParameters.yaml 09:48:58 policy-pap | PDP group configuration file: /opt/app/policy/pap/etc/mounted/groups.json 09:48:58 policy-pap | 09:48:58 policy-pap | . ____ _ __ _ _ 09:48:58 policy-pap | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ 09:48:58 policy-pap | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ 09:48:58 policy-pap | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) 09:48:58 policy-pap | ' |____| .__|_| |_|_| |_\__, | / / / / 09:48:58 policy-pap | =========|_|==============|___/=/_/_/_/ 09:48:58 policy-pap | 09:48:58 policy-pap | :: Spring Boot :: (v3.4.6) 09:48:58 policy-pap | 09:48:58 policy-pap | [2025-06-19T09:43:16.970+00:00|INFO|PolicyPapApplication|main] Starting PolicyPapApplication using Java 17.0.15 with PID 63 (/app/pap.jar started by policy in /opt/app/policy/pap/bin) 09:48:58 policy-pap | [2025-06-19T09:43:16.972+00:00|INFO|PolicyPapApplication|main] The following 1 profile is active: "default" 09:48:58 policy-pap | [2025-06-19T09:43:18.498+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. 09:48:58 policy-pap | [2025-06-19T09:43:18.592+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 81 ms. Found 7 JPA repository interfaces. 
09:48:58 policy-pap | [2025-06-19T09:43:19.619+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port 6969 (http) 09:48:58 policy-pap | [2025-06-19T09:43:19.634+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] 09:48:58 policy-pap | [2025-06-19T09:43:19.636+00:00|INFO|StandardService|main] Starting service [Tomcat] 09:48:58 policy-pap | [2025-06-19T09:43:19.637+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.41] 09:48:58 policy-pap | [2025-06-19T09:43:19.695+00:00|INFO|[/policy/pap/v1]|main] Initializing Spring embedded WebApplicationContext 09:48:58 policy-pap | [2025-06-19T09:43:19.695+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 2653 ms 09:48:58 policy-pap | [2025-06-19T09:43:20.159+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] 09:48:58 policy-pap | [2025-06-19T09:43:20.235+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.6.16.Final 09:48:58 policy-pap | [2025-06-19T09:43:20.279+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled 09:48:58 policy-pap | [2025-06-19T09:43:20.773+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer 09:48:58 policy-pap | [2025-06-19T09:43:20.850+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... 09:48:58 policy-pap | [2025-06-19T09:43:21.135+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.postgresql.jdbc.PgConnection@6e337ba1 09:48:58 policy-pap | [2025-06-19T09:43:21.137+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. 09:48:58 policy-pap | [2025-06-19T09:43:21.258+00:00|INFO|pooling|main] HHH10001005: Database info: 09:48:58 policy-pap | Database JDBC URL [Connecting through datasource 'HikariDataSource (HikariPool-1)'] 09:48:58 policy-pap | Database driver: undefined/unknown 09:48:58 policy-pap | Database version: 16.4 09:48:58 policy-pap | Autocommit mode: undefined/unknown 09:48:58 policy-pap | Isolation level: undefined/unknown 09:48:58 policy-pap | Minimum pool size: undefined/unknown 09:48:58 policy-pap | Maximum pool size: undefined/unknown 09:48:58 policy-pap | [2025-06-19T09:43:23.694+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) 09:48:58 policy-pap | [2025-06-19T09:43:23.699+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' 09:48:58 policy-pap | [2025-06-19T09:43:25.153+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 09:48:58 policy-pap | allow.auto.create.topics = true 09:48:58 policy-pap | auto.commit.interval.ms = 5000 09:48:58 policy-pap | auto.include.jmx.reporter = true 09:48:58 policy-pap | auto.offset.reset = latest 09:48:58 policy-pap | bootstrap.servers = [kafka:9092] 09:48:58 policy-pap | check.crcs = true 09:48:58 policy-pap | client.dns.lookup = use_all_dns_ips 09:48:58 policy-pap | client.id = consumer-be3aa0f9-34f3-4045-970c-8ec59634b69d-1 09:48:58 policy-pap | client.rack = 09:48:58 policy-pap | connections.max.idle.ms = 540000 09:48:58 policy-pap | default.api.timeout.ms = 60000 09:48:58 policy-pap | enable.auto.commit = true 09:48:58 policy-pap | enable.metrics.push = true 09:48:58 policy-pap | exclude.internal.topics = true 09:48:58 policy-pap | fetch.max.bytes = 52428800 09:48:58 policy-pap | fetch.max.wait.ms = 500 09:48:58 policy-pap | 
fetch.min.bytes = 1 09:48:58 policy-pap | group.id = be3aa0f9-34f3-4045-970c-8ec59634b69d 09:48:58 policy-pap | group.instance.id = null 09:48:58 policy-pap | group.protocol = classic 09:48:58 policy-pap | group.remote.assignor = null 09:48:58 policy-pap | heartbeat.interval.ms = 3000 09:48:58 policy-pap | interceptor.classes = [] 09:48:58 policy-pap | internal.leave.group.on.close = true 09:48:58 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 09:48:58 policy-pap | isolation.level = read_uncommitted 09:48:58 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 09:48:58 policy-pap | max.partition.fetch.bytes = 1048576 09:48:58 policy-pap | max.poll.interval.ms = 300000 09:48:58 policy-pap | max.poll.records = 500 09:48:58 policy-pap | metadata.max.age.ms = 300000 09:48:58 policy-pap | metadata.recovery.strategy = none 09:48:58 policy-pap | metric.reporters = [] 09:48:58 policy-pap | metrics.num.samples = 2 09:48:58 policy-pap | metrics.recording.level = INFO 09:48:58 policy-pap | metrics.sample.window.ms = 30000 09:48:58 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 09:48:58 policy-pap | receive.buffer.bytes = 65536 09:48:58 policy-pap | reconnect.backoff.max.ms = 1000 09:48:58 policy-pap | reconnect.backoff.ms = 50 09:48:58 policy-pap | request.timeout.ms = 30000 09:48:58 policy-pap | retry.backoff.max.ms = 1000 09:48:58 policy-pap | retry.backoff.ms = 100 09:48:58 policy-pap | sasl.client.callback.handler.class = null 09:48:58 policy-pap | sasl.jaas.config = null 09:48:58 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 09:48:58 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 09:48:58 policy-pap | sasl.kerberos.service.name = null 09:48:58 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 09:48:58 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 09:48:58 policy-pap | sasl.login.callback.handler.class = null 09:48:58 policy-pap | sasl.login.class = null 09:48:58 policy-pap | sasl.login.connect.timeout.ms = null 09:48:58 policy-pap | sasl.login.read.timeout.ms = null 09:48:58 policy-pap | sasl.login.refresh.buffer.seconds = 300 09:48:58 policy-pap | sasl.login.refresh.min.period.seconds = 60 09:48:58 policy-pap | sasl.login.refresh.window.factor = 0.8 09:48:58 policy-pap | sasl.login.refresh.window.jitter = 0.05 09:48:58 policy-pap | sasl.login.retry.backoff.max.ms = 10000 09:48:58 policy-pap | sasl.login.retry.backoff.ms = 100 09:48:58 policy-pap | sasl.mechanism = GSSAPI 09:48:58 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 09:48:58 policy-pap | sasl.oauthbearer.expected.audience = null 09:48:58 policy-pap | sasl.oauthbearer.expected.issuer = null 09:48:58 policy-pap | sasl.oauthbearer.header.urlencode = false 09:48:58 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 09:48:58 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 09:48:58 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 09:48:58 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 09:48:58 policy-pap | sasl.oauthbearer.scope.claim.name = scope 09:48:58 policy-pap | sasl.oauthbearer.sub.claim.name = sub 09:48:58 policy-pap | sasl.oauthbearer.token.endpoint.url = null 09:48:58 policy-pap | security.protocol = PLAINTEXT 09:48:58 policy-pap | security.providers = null 09:48:58 policy-pap | send.buffer.bytes = 131072 09:48:58 policy-pap | 
session.timeout.ms = 45000 09:48:58 policy-pap | socket.connection.setup.timeout.max.ms = 30000 09:48:58 policy-pap | socket.connection.setup.timeout.ms = 10000 09:48:58 policy-pap | ssl.cipher.suites = null 09:48:58 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 09:48:58 policy-pap | ssl.endpoint.identification.algorithm = https 09:48:58 policy-pap | ssl.engine.factory.class = null 09:48:58 policy-pap | ssl.key.password = null 09:48:58 policy-pap | ssl.keymanager.algorithm = SunX509 09:48:58 policy-pap | ssl.keystore.certificate.chain = null 09:48:58 policy-pap | ssl.keystore.key = null 09:48:58 policy-pap | ssl.keystore.location = null 09:48:58 policy-pap | ssl.keystore.password = null 09:48:58 policy-pap | ssl.keystore.type = JKS 09:48:58 policy-pap | ssl.protocol = TLSv1.3 09:48:58 policy-pap | ssl.provider = null 09:48:58 policy-pap | ssl.secure.random.implementation = null 09:48:58 policy-pap | ssl.trustmanager.algorithm = PKIX 09:48:58 policy-pap | ssl.truststore.certificates = null 09:48:58 policy-pap | ssl.truststore.location = null 09:48:58 policy-pap | ssl.truststore.password = null 09:48:58 policy-pap | ssl.truststore.type = JKS 09:48:58 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 09:48:58 policy-pap | 09:48:58 policy-pap | [2025-06-19T09:43:25.207+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector 09:48:58 policy-pap | [2025-06-19T09:43:25.357+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 09:48:58 policy-pap | [2025-06-19T09:43:25.357+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 09:48:58 policy-pap | [2025-06-19T09:43:25.357+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750326205356 09:48:58 policy-pap | [2025-06-19T09:43:25.360+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-be3aa0f9-34f3-4045-970c-8ec59634b69d-1, groupId=be3aa0f9-34f3-4045-970c-8ec59634b69d] Subscribed to topic(s): policy-pdp-pap 09:48:58 policy-pap | [2025-06-19T09:43:25.361+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 09:48:58 policy-pap | allow.auto.create.topics = true 09:48:58 policy-pap | auto.commit.interval.ms = 5000 09:48:58 policy-pap | auto.include.jmx.reporter = true 09:48:58 policy-pap | auto.offset.reset = latest 09:48:58 policy-pap | bootstrap.servers = [kafka:9092] 09:48:58 policy-pap | check.crcs = true 09:48:58 policy-pap | client.dns.lookup = use_all_dns_ips 09:48:58 policy-pap | client.id = consumer-policy-pap-2 09:48:58 policy-pap | client.rack = 09:48:58 policy-pap | connections.max.idle.ms = 540000 09:48:58 policy-pap | default.api.timeout.ms = 60000 09:48:58 policy-pap | enable.auto.commit = true 09:48:58 policy-pap | enable.metrics.push = true 09:48:58 policy-pap | exclude.internal.topics = true 09:48:58 policy-pap | fetch.max.bytes = 52428800 09:48:58 policy-pap | fetch.max.wait.ms = 500 09:48:58 policy-pap | fetch.min.bytes = 1 09:48:58 policy-pap | group.id = policy-pap 09:48:58 policy-pap | group.instance.id = null 09:48:58 policy-pap | group.protocol = classic 09:48:58 policy-pap | group.remote.assignor = null 09:48:58 policy-pap | heartbeat.interval.ms = 3000 09:48:58 policy-pap | interceptor.classes = [] 09:48:58 policy-pap | internal.leave.group.on.close = true 09:48:58 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 09:48:58 policy-pap | isolation.level = read_uncommitted 09:48:58 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 09:48:58 policy-pap | 
max.partition.fetch.bytes = 1048576 09:48:58 policy-pap | max.poll.interval.ms = 300000 09:48:58 policy-pap | max.poll.records = 500 09:48:58 policy-pap | metadata.max.age.ms = 300000 09:48:58 policy-pap | metadata.recovery.strategy = none 09:48:58 policy-pap | metric.reporters = [] 09:48:58 policy-pap | metrics.num.samples = 2 09:48:58 policy-pap | metrics.recording.level = INFO 09:48:58 policy-pap | metrics.sample.window.ms = 30000 09:48:58 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 09:48:58 policy-pap | receive.buffer.bytes = 65536 09:48:58 policy-pap | reconnect.backoff.max.ms = 1000 09:48:58 policy-pap | reconnect.backoff.ms = 50 09:48:58 policy-pap | request.timeout.ms = 30000 09:48:58 policy-pap | retry.backoff.max.ms = 1000 09:48:58 policy-pap | retry.backoff.ms = 100 09:48:58 policy-pap | sasl.client.callback.handler.class = null 09:48:58 policy-pap | sasl.jaas.config = null 09:48:58 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 09:48:58 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 09:48:58 policy-pap | sasl.kerberos.service.name = null 09:48:58 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 09:48:58 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 09:48:58 policy-pap | sasl.login.callback.handler.class = null 09:48:58 policy-pap | sasl.login.class = null 09:48:58 policy-pap | sasl.login.connect.timeout.ms = null 09:48:58 policy-pap | sasl.login.read.timeout.ms = null 09:48:58 policy-pap | sasl.login.refresh.buffer.seconds = 300 09:48:58 policy-pap | sasl.login.refresh.min.period.seconds = 60 09:48:58 policy-pap | sasl.login.refresh.window.factor = 0.8 09:48:58 policy-pap | sasl.login.refresh.window.jitter = 0.05 09:48:58 policy-pap | sasl.login.retry.backoff.max.ms = 10000 09:48:58 policy-pap | sasl.login.retry.backoff.ms = 100 09:48:58 policy-pap | sasl.mechanism = GSSAPI 09:48:58 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 09:48:58 policy-pap | sasl.oauthbearer.expected.audience = null 09:48:58 policy-pap | sasl.oauthbearer.expected.issuer = null 09:48:58 policy-pap | sasl.oauthbearer.header.urlencode = false 09:48:58 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 09:48:58 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 09:48:58 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 09:48:58 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 09:48:58 policy-pap | sasl.oauthbearer.scope.claim.name = scope 09:48:58 policy-pap | sasl.oauthbearer.sub.claim.name = sub 09:48:58 policy-pap | sasl.oauthbearer.token.endpoint.url = null 09:48:58 policy-pap | security.protocol = PLAINTEXT 09:48:58 policy-pap | security.providers = null 09:48:58 policy-pap | send.buffer.bytes = 131072 09:48:58 policy-pap | session.timeout.ms = 45000 09:48:58 policy-pap | socket.connection.setup.timeout.max.ms = 30000 09:48:58 policy-pap | socket.connection.setup.timeout.ms = 10000 09:48:58 policy-pap | ssl.cipher.suites = null 09:48:58 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 09:48:58 policy-pap | ssl.endpoint.identification.algorithm = https 09:48:58 policy-pap | ssl.engine.factory.class = null 09:48:58 policy-pap | ssl.key.password = null 09:48:58 policy-pap | ssl.keymanager.algorithm = SunX509 09:48:58 policy-pap | ssl.keystore.certificate.chain = null 09:48:58 policy-pap | ssl.keystore.key = null 09:48:58 policy-pap | ssl.keystore.location = null 09:48:58 
policy-pap | ssl.keystore.password = null 09:48:58 policy-pap | ssl.keystore.type = JKS 09:48:58 policy-pap | ssl.protocol = TLSv1.3 09:48:58 policy-pap | ssl.provider = null 09:48:58 policy-pap | ssl.secure.random.implementation = null 09:48:58 policy-pap | ssl.trustmanager.algorithm = PKIX 09:48:58 policy-pap | ssl.truststore.certificates = null 09:48:58 policy-pap | ssl.truststore.location = null 09:48:58 policy-pap | ssl.truststore.password = null 09:48:58 policy-pap | ssl.truststore.type = JKS 09:48:58 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 09:48:58 policy-pap | 09:48:58 policy-pap | [2025-06-19T09:43:25.362+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector 09:48:58 policy-pap | [2025-06-19T09:43:25.370+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 09:48:58 policy-pap | [2025-06-19T09:43:25.370+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 09:48:58 policy-pap | [2025-06-19T09:43:25.370+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750326205370 09:48:58 policy-pap | [2025-06-19T09:43:25.370+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-policy-pap-2, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap 09:48:58 policy-pap | [2025-06-19T09:43:25.769+00:00|INFO|PapDatabaseInitializer|main] Created initial pdpGroup in DB - PdpGroups(groups=[PdpGroup(name=opaGroup, description=null, pdpGroupState=ACTIVE, properties={}, pdpSubgroups=[PdpSubGroup(pdpType=opa, supportedPolicyTypes=[onap.policies.native.opa 1.0.0], policies=[slice.capacity.check 1.0.0], currentInstanceCount=0, desiredInstanceCount=1, properties={}, pdpInstances=null)])]) from /opt/app/policy/pap/etc/mounted/groups.json 09:48:58 policy-pap | [2025-06-19T09:43:25.934+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning 09:48:58 policy-pap | [2025-06-19T09:43:26.039+00:00|INFO|InitializeUserDetailsBeanManagerConfigurer$InitializeUserDetailsManagerConfigurer|main] Global AuthenticationManager configured with UserDetailsService bean with name inMemoryUserDetailsManager 09:48:58 policy-pap | [2025-06-19T09:43:26.328+00:00|INFO|OptionalValidatorFactoryBean|main] Failed to set up a Bean Validation provider: jakarta.validation.NoProviderFoundException: Unable to create a Configuration, because no Jakarta Validation provider could be found. Add a provider like Hibernate Validator (RI) to your classpath. 
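
The ConsumerConfig dumps above are simply the properties PAP hands to the Kafka clients library; only a handful differ from client defaults (bootstrap kafka:9092, the group ids, auto.offset.reset = latest, String deserializers, PLAINTEXT). A minimal Java sketch that reproduces those logged settings, assuming kafka-clients 3.9.x on the classpath; this is an illustration of the config, not the PAP source code:

    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class PapConsumerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Values copied from the ConsumerConfig dumps in the log above.
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "policy-pap");
            props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                // Matches "Subscribed to topic(s): policy-pdp-pap" above.
                consumer.subscribe(List.of("policy-pdp-pap"));
            }
        }
    }

Constructing the consumer is also what triggers the "ConsumerConfig values:" dump itself: the client logs its effective configuration once at instantiation, which is why the same block repeats for each consumer instance.
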
09:48:58 policy-pap | [2025-06-19T09:43:27.131+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoints beneath base path '' 09:48:58 policy-pap | [2025-06-19T09:43:27.243+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] 09:48:58 policy-pap | [2025-06-19T09:43:27.268+00:00|INFO|TomcatWebServer|main] Tomcat started on port 6969 (http) with context path '/policy/pap/v1' 09:48:58 policy-pap | [2025-06-19T09:43:27.290+00:00|INFO|ServiceManager|main] Policy PAP starting 09:48:58 policy-pap | [2025-06-19T09:43:27.290+00:00|INFO|ServiceManager|main] Policy PAP starting Meter Registry 09:48:58 policy-pap | [2025-06-19T09:43:27.291+00:00|INFO|ServiceManager|main] Policy PAP starting PAP parameters 09:48:58 policy-pap | [2025-06-19T09:43:27.291+00:00|INFO|ServiceManager|main] Policy PAP starting Pdp Heartbeat Listener 09:48:58 policy-pap | [2025-06-19T09:43:27.291+00:00|INFO|ServiceManager|main] Policy PAP starting Response Request ID Dispatcher 09:48:58 policy-pap | [2025-06-19T09:43:27.292+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Request ID Dispatcher 09:48:58 policy-pap | [2025-06-19T09:43:27.292+00:00|INFO|ServiceManager|main] Policy PAP starting Response Message Dispatcher 09:48:58 policy-pap | [2025-06-19T09:43:27.293+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=be3aa0f9-34f3-4045-970c-8ec59634b69d, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@1e60dce7 09:48:58 policy-pap | [2025-06-19T09:43:27.304+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=be3aa0f9-34f3-4045-970c-8ec59634b69d, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 09:48:58 policy-pap | [2025-06-19T09:43:27.304+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 09:48:58 policy-pap | allow.auto.create.topics = true 09:48:58 policy-pap | auto.commit.interval.ms = 5000 09:48:58 policy-pap | auto.include.jmx.reporter = true 09:48:58 policy-pap | auto.offset.reset = latest 09:48:58 policy-pap | bootstrap.servers = [kafka:9092] 09:48:58 policy-pap | check.crcs = true 09:48:58 policy-pap | client.dns.lookup = use_all_dns_ips 09:48:58 policy-pap | client.id = consumer-be3aa0f9-34f3-4045-970c-8ec59634b69d-3 09:48:58 policy-pap | client.rack = 09:48:58 policy-pap | connections.max.idle.ms = 540000 09:48:58 policy-pap | default.api.timeout.ms = 60000 09:48:58 policy-pap | enable.auto.commit = true 09:48:58 policy-pap | enable.metrics.push = true 09:48:58 policy-pap | exclude.internal.topics = true 09:48:58 policy-pap | 
fetch.max.bytes = 52428800 09:48:58 policy-pap | fetch.max.wait.ms = 500 09:48:58 policy-pap | fetch.min.bytes = 1 09:48:58 policy-pap | group.id = be3aa0f9-34f3-4045-970c-8ec59634b69d 09:48:58 policy-pap | group.instance.id = null 09:48:58 policy-pap | group.protocol = classic 09:48:58 policy-pap | group.remote.assignor = null 09:48:58 policy-pap | heartbeat.interval.ms = 3000 09:48:58 policy-pap | interceptor.classes = [] 09:48:58 policy-pap | internal.leave.group.on.close = true 09:48:58 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 09:48:58 policy-pap | isolation.level = read_uncommitted 09:48:58 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 09:48:58 policy-pap | max.partition.fetch.bytes = 1048576 09:48:58 policy-pap | max.poll.interval.ms = 300000 09:48:58 policy-pap | max.poll.records = 500 09:48:58 policy-pap | metadata.max.age.ms = 300000 09:48:58 policy-pap | metadata.recovery.strategy = none 09:48:58 policy-pap | metric.reporters = [] 09:48:58 policy-pap | metrics.num.samples = 2 09:48:58 policy-pap | metrics.recording.level = INFO 09:48:58 policy-pap | metrics.sample.window.ms = 30000 09:48:58 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 09:48:58 policy-pap | receive.buffer.bytes = 65536 09:48:58 policy-pap | reconnect.backoff.max.ms = 1000 09:48:58 policy-pap | reconnect.backoff.ms = 50 09:48:58 policy-pap | request.timeout.ms = 30000 09:48:58 policy-pap | retry.backoff.max.ms = 1000 09:48:58 policy-pap | retry.backoff.ms = 100 09:48:58 policy-pap | sasl.client.callback.handler.class = null 09:48:58 policy-pap | sasl.jaas.config = null 09:48:58 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 09:48:58 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 09:48:58 policy-pap | sasl.kerberos.service.name = null 09:48:58 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 09:48:58 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 09:48:58 policy-pap | sasl.login.callback.handler.class = null 09:48:58 policy-pap | sasl.login.class = null 09:48:58 policy-pap | sasl.login.connect.timeout.ms = null 09:48:58 policy-pap | sasl.login.read.timeout.ms = null 09:48:58 policy-pap | sasl.login.refresh.buffer.seconds = 300 09:48:58 policy-pap | sasl.login.refresh.min.period.seconds = 60 09:48:58 policy-pap | sasl.login.refresh.window.factor = 0.8 09:48:58 policy-pap | sasl.login.refresh.window.jitter = 0.05 09:48:58 policy-pap | sasl.login.retry.backoff.max.ms = 10000 09:48:58 policy-pap | sasl.login.retry.backoff.ms = 100 09:48:58 policy-pap | sasl.mechanism = GSSAPI 09:48:58 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 09:48:58 policy-pap | sasl.oauthbearer.expected.audience = null 09:48:58 policy-pap | sasl.oauthbearer.expected.issuer = null 09:48:58 policy-pap | sasl.oauthbearer.header.urlencode = false 09:48:58 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 09:48:58 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 09:48:58 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 09:48:58 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 09:48:58 policy-pap | sasl.oauthbearer.scope.claim.name = scope 09:48:58 policy-pap | sasl.oauthbearer.sub.claim.name = sub 09:48:58 policy-pap | sasl.oauthbearer.token.endpoint.url = null 09:48:58 policy-pap | security.protocol = PLAINTEXT 09:48:58 policy-pap | 
security.providers = null 09:48:58 policy-pap | send.buffer.bytes = 131072 09:48:58 policy-pap | session.timeout.ms = 45000 09:48:58 policy-pap | socket.connection.setup.timeout.max.ms = 30000 09:48:58 policy-pap | socket.connection.setup.timeout.ms = 10000 09:48:58 policy-pap | ssl.cipher.suites = null 09:48:58 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 09:48:58 policy-pap | ssl.endpoint.identification.algorithm = https 09:48:58 policy-pap | ssl.engine.factory.class = null 09:48:58 policy-pap | ssl.key.password = null 09:48:58 policy-pap | ssl.keymanager.algorithm = SunX509 09:48:58 policy-pap | ssl.keystore.certificate.chain = null 09:48:58 policy-pap | ssl.keystore.key = null 09:48:58 policy-pap | ssl.keystore.location = null 09:48:58 policy-pap | ssl.keystore.password = null 09:48:58 policy-pap | ssl.keystore.type = JKS 09:48:58 policy-pap | ssl.protocol = TLSv1.3 09:48:58 policy-pap | ssl.provider = null 09:48:58 policy-pap | ssl.secure.random.implementation = null 09:48:58 policy-pap | ssl.trustmanager.algorithm = PKIX 09:48:58 policy-pap | ssl.truststore.certificates = null 09:48:58 policy-pap | ssl.truststore.location = null 09:48:58 policy-pap | ssl.truststore.password = null 09:48:58 policy-pap | ssl.truststore.type = JKS 09:48:58 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 09:48:58 policy-pap | 09:48:58 policy-pap | [2025-06-19T09:43:27.306+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector 09:48:58 policy-pap | [2025-06-19T09:43:27.312+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 09:48:58 policy-pap | [2025-06-19T09:43:27.312+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 09:48:58 policy-pap | [2025-06-19T09:43:27.312+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750326207312 09:48:58 policy-pap | [2025-06-19T09:43:27.313+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-be3aa0f9-34f3-4045-970c-8ec59634b69d-3, groupId=be3aa0f9-34f3-4045-970c-8ec59634b69d] Subscribed to topic(s): policy-pdp-pap 09:48:58 policy-pap | [2025-06-19T09:43:27.313+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Message Dispatcher 09:48:58 policy-pap | [2025-06-19T09:43:27.313+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=fb30ad7f-f7b4-4b47-9078-e648c03e8c75, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@1222d0e4 09:48:58 policy-pap | [2025-06-19T09:43:27.313+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=fb30ad7f-f7b4-4b47-9078-e648c03e8c75, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, 
effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 09:48:58 policy-pap | [2025-06-19T09:43:27.314+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 09:48:58 policy-pap | allow.auto.create.topics = true 09:48:58 policy-pap | auto.commit.interval.ms = 5000 09:48:58 policy-pap | auto.include.jmx.reporter = true 09:48:58 policy-pap | auto.offset.reset = latest 09:48:58 policy-pap | bootstrap.servers = [kafka:9092] 09:48:58 policy-pap | check.crcs = true 09:48:58 policy-pap | client.dns.lookup = use_all_dns_ips 09:48:58 policy-pap | client.id = consumer-policy-pap-4 09:48:58 policy-pap | client.rack = 09:48:58 policy-pap | connections.max.idle.ms = 540000 09:48:58 policy-pap | default.api.timeout.ms = 60000 09:48:58 policy-pap | enable.auto.commit = true 09:48:58 policy-pap | enable.metrics.push = true 09:48:58 policy-pap | exclude.internal.topics = true 09:48:58 policy-pap | fetch.max.bytes = 52428800 09:48:58 policy-pap | fetch.max.wait.ms = 500 09:48:58 policy-pap | fetch.min.bytes = 1 09:48:58 policy-pap | group.id = policy-pap 09:48:58 policy-pap | group.instance.id = null 09:48:58 policy-pap | group.protocol = classic 09:48:58 policy-pap | group.remote.assignor = null 09:48:58 policy-pap | heartbeat.interval.ms = 3000 09:48:58 policy-pap | interceptor.classes = [] 09:48:58 policy-pap | internal.leave.group.on.close = true 09:48:58 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 09:48:58 policy-pap | isolation.level = read_uncommitted 09:48:58 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 09:48:58 policy-pap | max.partition.fetch.bytes = 1048576 09:48:58 policy-pap | max.poll.interval.ms = 300000 09:48:58 policy-pap | max.poll.records = 500 09:48:58 policy-pap | metadata.max.age.ms = 300000 09:48:58 policy-pap | metadata.recovery.strategy = none 09:48:58 policy-pap | metric.reporters = [] 09:48:58 policy-pap | metrics.num.samples = 2 09:48:58 policy-pap | metrics.recording.level = INFO 09:48:58 policy-pap | metrics.sample.window.ms = 30000 09:48:58 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 09:48:58 policy-pap | receive.buffer.bytes = 65536 09:48:58 policy-pap | reconnect.backoff.max.ms = 1000 09:48:58 policy-pap | reconnect.backoff.ms = 50 09:48:58 policy-pap | request.timeout.ms = 30000 09:48:58 policy-pap | retry.backoff.max.ms = 1000 09:48:58 policy-pap | retry.backoff.ms = 100 09:48:58 policy-pap | sasl.client.callback.handler.class = null 09:48:58 policy-pap | sasl.jaas.config = null 09:48:58 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 09:48:58 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 09:48:58 policy-pap | sasl.kerberos.service.name = null 09:48:58 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 09:48:58 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 09:48:58 policy-pap | sasl.login.callback.handler.class = null 09:48:58 policy-pap | sasl.login.class = null 09:48:58 policy-pap | sasl.login.connect.timeout.ms = null 09:48:58 policy-pap | sasl.login.read.timeout.ms = null 09:48:58 policy-pap | sasl.login.refresh.buffer.seconds = 300 09:48:58 policy-pap | sasl.login.refresh.min.period.seconds = 60 09:48:58 policy-pap | sasl.login.refresh.window.factor = 0.8 09:48:58 policy-pap | sasl.login.refresh.window.jitter = 0.05 09:48:58 policy-pap | sasl.login.retry.backoff.max.ms = 10000 09:48:58 policy-pap | 
sasl.login.retry.backoff.ms = 100 09:48:58 policy-pap | sasl.mechanism = GSSAPI 09:48:58 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 09:48:58 policy-pap | sasl.oauthbearer.expected.audience = null 09:48:58 policy-pap | sasl.oauthbearer.expected.issuer = null 09:48:58 policy-pap | sasl.oauthbearer.header.urlencode = false 09:48:58 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 09:48:58 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 09:48:58 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 09:48:58 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 09:48:58 policy-pap | sasl.oauthbearer.scope.claim.name = scope 09:48:58 policy-pap | sasl.oauthbearer.sub.claim.name = sub 09:48:58 policy-pap | sasl.oauthbearer.token.endpoint.url = null 09:48:58 policy-pap | security.protocol = PLAINTEXT 09:48:58 policy-pap | security.providers = null 09:48:58 policy-pap | send.buffer.bytes = 131072 09:48:58 policy-pap | session.timeout.ms = 45000 09:48:58 policy-pap | socket.connection.setup.timeout.max.ms = 30000 09:48:58 policy-pap | socket.connection.setup.timeout.ms = 10000 09:48:58 policy-pap | ssl.cipher.suites = null 09:48:58 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 09:48:58 policy-pap | ssl.endpoint.identification.algorithm = https 09:48:58 policy-pap | ssl.engine.factory.class = null 09:48:58 policy-pap | ssl.key.password = null 09:48:58 policy-pap | ssl.keymanager.algorithm = SunX509 09:48:58 policy-pap | ssl.keystore.certificate.chain = null 09:48:58 policy-pap | ssl.keystore.key = null 09:48:58 policy-pap | ssl.keystore.location = null 09:48:58 policy-pap | ssl.keystore.password = null 09:48:58 policy-pap | ssl.keystore.type = JKS 09:48:58 policy-pap | ssl.protocol = TLSv1.3 09:48:58 policy-pap | ssl.provider = null 09:48:58 policy-pap | ssl.secure.random.implementation = null 09:48:58 policy-pap | ssl.trustmanager.algorithm = PKIX 09:48:58 policy-pap | ssl.truststore.certificates = null 09:48:58 policy-pap | ssl.truststore.location = null 09:48:58 policy-pap | ssl.truststore.password = null 09:48:58 policy-pap | ssl.truststore.type = JKS 09:48:58 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 09:48:58 policy-pap | 09:48:58 policy-pap | [2025-06-19T09:43:27.314+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector 09:48:58 policy-pap | [2025-06-19T09:43:27.320+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 09:48:58 policy-pap | [2025-06-19T09:43:27.320+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 09:48:58 policy-pap | [2025-06-19T09:43:27.320+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750326207320 09:48:58 policy-pap | [2025-06-19T09:43:27.321+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap 09:48:58 policy-pap | [2025-06-19T09:43:27.321+00:00|INFO|ServiceManager|main] Policy PAP starting topics 09:48:58 policy-pap | [2025-06-19T09:43:27.321+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=fb30ad7f-f7b4-4b47-9078-e648c03e8c75, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-heartbeat,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, 
useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 09:48:58 policy-pap | [2025-06-19T09:43:27.321+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=be3aa0f9-34f3-4045-970c-8ec59634b69d, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 09:48:58 policy-pap | [2025-06-19T09:43:27.321+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=6ca129f0-132a-4077-9313-58d3fd17d7a2, alive=false, publisher=null]]: starting 09:48:58 policy-pap | [2025-06-19T09:43:27.337+00:00|INFO|ProducerConfig|main] ProducerConfig values: 09:48:58 policy-pap | acks = -1 09:48:58 policy-pap | auto.include.jmx.reporter = true 09:48:58 policy-pap | batch.size = 16384 09:48:58 policy-pap | bootstrap.servers = [kafka:9092] 09:48:58 policy-pap | buffer.memory = 33554432 09:48:58 policy-pap | client.dns.lookup = use_all_dns_ips 09:48:58 policy-pap | client.id = producer-1 09:48:58 policy-pap | compression.gzip.level = -1 09:48:58 policy-pap | compression.lz4.level = 9 09:48:58 policy-pap | compression.type = none 09:48:58 policy-pap | compression.zstd.level = 3 09:48:58 policy-pap | connections.max.idle.ms = 540000 09:48:58 policy-pap | delivery.timeout.ms = 120000 09:48:58 policy-pap | enable.idempotence = true 09:48:58 policy-pap | enable.metrics.push = true 09:48:58 policy-pap | interceptor.classes = [] 09:48:58 policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 09:48:58 policy-pap | linger.ms = 0 09:48:58 policy-pap | max.block.ms = 60000 09:48:58 policy-pap | max.in.flight.requests.per.connection = 5 09:48:58 policy-pap | max.request.size = 1048576 09:48:58 policy-pap | metadata.max.age.ms = 300000 09:48:58 policy-pap | metadata.max.idle.ms = 300000 09:48:58 policy-pap | metadata.recovery.strategy = none 09:48:58 policy-pap | metric.reporters = [] 09:48:58 policy-pap | metrics.num.samples = 2 09:48:58 policy-pap | metrics.recording.level = INFO 09:48:58 policy-pap | metrics.sample.window.ms = 30000 09:48:58 policy-pap | partitioner.adaptive.partitioning.enable = true 09:48:58 policy-pap | partitioner.availability.timeout.ms = 0 09:48:58 policy-pap | partitioner.class = null 09:48:58 policy-pap | partitioner.ignore.keys = false 09:48:58 policy-pap | receive.buffer.bytes = 32768 09:48:58 policy-pap | reconnect.backoff.max.ms = 1000 09:48:58 policy-pap | reconnect.backoff.ms = 50 09:48:58 policy-pap | request.timeout.ms = 30000 09:48:58 policy-pap | retries = 2147483647 09:48:58 policy-pap | retry.backoff.max.ms = 1000 09:48:58 policy-pap | retry.backoff.ms = 100 09:48:58 policy-pap | sasl.client.callback.handler.class = null 09:48:58 policy-pap | sasl.jaas.config = null 09:48:58 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 09:48:58 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 09:48:58 policy-pap 
| sasl.kerberos.service.name = null 09:48:58 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 09:48:58 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 09:48:58 policy-pap | sasl.login.callback.handler.class = null 09:48:58 policy-pap | sasl.login.class = null 09:48:58 policy-pap | sasl.login.connect.timeout.ms = null 09:48:58 policy-pap | sasl.login.read.timeout.ms = null 09:48:58 policy-pap | sasl.login.refresh.buffer.seconds = 300 09:48:58 policy-pap | sasl.login.refresh.min.period.seconds = 60 09:48:58 policy-pap | sasl.login.refresh.window.factor = 0.8 09:48:58 policy-pap | sasl.login.refresh.window.jitter = 0.05 09:48:58 policy-pap | sasl.login.retry.backoff.max.ms = 10000 09:48:58 policy-pap | sasl.login.retry.backoff.ms = 100 09:48:58 policy-pap | sasl.mechanism = GSSAPI 09:48:58 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 09:48:58 policy-pap | sasl.oauthbearer.expected.audience = null 09:48:58 policy-pap | sasl.oauthbearer.expected.issuer = null 09:48:58 policy-pap | sasl.oauthbearer.header.urlencode = false 09:48:58 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 09:48:58 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 09:48:58 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 09:48:58 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 09:48:58 policy-pap | sasl.oauthbearer.scope.claim.name = scope 09:48:58 policy-pap | sasl.oauthbearer.sub.claim.name = sub 09:48:58 policy-pap | sasl.oauthbearer.token.endpoint.url = null 09:48:58 policy-pap | security.protocol = PLAINTEXT 09:48:58 policy-pap | security.providers = null 09:48:58 policy-pap | send.buffer.bytes = 131072 09:48:58 policy-pap | socket.connection.setup.timeout.max.ms = 30000 09:48:58 policy-pap | socket.connection.setup.timeout.ms = 10000 09:48:58 policy-pap | ssl.cipher.suites = null 09:48:58 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 09:48:58 policy-pap | ssl.endpoint.identification.algorithm = https 09:48:58 policy-pap | ssl.engine.factory.class = null 09:48:58 policy-pap | ssl.key.password = null 09:48:58 policy-pap | ssl.keymanager.algorithm = SunX509 09:48:58 policy-pap | ssl.keystore.certificate.chain = null 09:48:58 policy-pap | ssl.keystore.key = null 09:48:58 policy-pap | ssl.keystore.location = null 09:48:58 policy-pap | ssl.keystore.password = null 09:48:58 policy-pap | ssl.keystore.type = JKS 09:48:58 policy-pap | ssl.protocol = TLSv1.3 09:48:58 policy-pap | ssl.provider = null 09:48:58 policy-pap | ssl.secure.random.implementation = null 09:48:58 policy-pap | ssl.trustmanager.algorithm = PKIX 09:48:58 policy-pap | ssl.truststore.certificates = null 09:48:58 policy-pap | ssl.truststore.location = null 09:48:58 policy-pap | ssl.truststore.password = null 09:48:58 policy-pap | ssl.truststore.type = JKS 09:48:58 policy-pap | transaction.timeout.ms = 60000 09:48:58 policy-pap | transactional.id = null 09:48:58 policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 09:48:58 policy-pap | 09:48:58 policy-pap | [2025-06-19T09:43:27.338+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector 09:48:58 policy-pap | [2025-06-19T09:43:27.354+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. 
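
The ProducerConfig dump for producer-1 shows acks = -1 (i.e. "all"), enable.idempotence = true and retries = 2147483647, which is why the line just above reports an idempotent producer being instantiated. A hedged Java sketch of an equivalent producer publishing to the policy-pdp-pap sink; the payload string is a placeholder, not a real PAP message:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class PapProducerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            props.put(ProducerConfig.ACKS_CONFIG, "all");              // logged as acks = -1
            props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true); // "Instantiated an idempotent producer"
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // Placeholder value; the real PDP_UPDATE bodies appear later in this log.
                producer.send(new ProducerRecord<>("policy-pdp-pap", "example message"));
                producer.flush();
            }
        }
    }

With idempotence enabled the broker assigns the producer a ProducerId and epoch on first use, which is what the "ProducerId set to 0 with epoch 0" lines further below record.
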
09:48:58 policy-pap | [2025-06-19T09:43:27.399+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 09:48:58 policy-pap | [2025-06-19T09:43:27.399+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 09:48:58 policy-pap | [2025-06-19T09:43:27.399+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750326207399 09:48:58 policy-pap | [2025-06-19T09:43:27.400+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=6ca129f0-132a-4077-9313-58d3fd17d7a2, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 09:48:58 policy-pap | [2025-06-19T09:43:27.400+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=cf825650-c420-4695-adb7-68635567ef8e, alive=false, publisher=null]]: starting 09:48:58 policy-pap | [2025-06-19T09:43:27.401+00:00|INFO|ProducerConfig|main] ProducerConfig values: 09:48:58 policy-pap | acks = -1 09:48:58 policy-pap | auto.include.jmx.reporter = true 09:48:58 policy-pap | batch.size = 16384 09:48:58 policy-pap | bootstrap.servers = [kafka:9092] 09:48:58 policy-pap | buffer.memory = 33554432 09:48:58 policy-pap | client.dns.lookup = use_all_dns_ips 09:48:58 policy-pap | client.id = producer-2 09:48:58 policy-pap | compression.gzip.level = -1 09:48:58 policy-pap | compression.lz4.level = 9 09:48:58 policy-pap | compression.type = none 09:48:58 policy-pap | compression.zstd.level = 3 09:48:58 policy-pap | connections.max.idle.ms = 540000 09:48:58 policy-pap | delivery.timeout.ms = 120000 09:48:58 policy-pap | enable.idempotence = true 09:48:58 policy-pap | enable.metrics.push = true 09:48:58 policy-pap | interceptor.classes = [] 09:48:58 policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 09:48:58 policy-pap | linger.ms = 0 09:48:58 policy-pap | max.block.ms = 60000 09:48:58 policy-pap | max.in.flight.requests.per.connection = 5 09:48:58 policy-pap | max.request.size = 1048576 09:48:58 policy-pap | metadata.max.age.ms = 300000 09:48:58 policy-pap | metadata.max.idle.ms = 300000 09:48:58 policy-pap | metadata.recovery.strategy = none 09:48:58 policy-pap | metric.reporters = [] 09:48:58 policy-pap | metrics.num.samples = 2 09:48:58 policy-pap | metrics.recording.level = INFO 09:48:58 policy-pap | metrics.sample.window.ms = 30000 09:48:58 policy-pap | partitioner.adaptive.partitioning.enable = true 09:48:58 policy-pap | partitioner.availability.timeout.ms = 0 09:48:58 policy-pap | partitioner.class = null 09:48:58 policy-pap | partitioner.ignore.keys = false 09:48:58 policy-pap | receive.buffer.bytes = 32768 09:48:58 policy-pap | reconnect.backoff.max.ms = 1000 09:48:58 policy-pap | reconnect.backoff.ms = 50 09:48:58 policy-pap | request.timeout.ms = 30000 09:48:58 policy-pap | retries = 2147483647 09:48:58 policy-pap | retry.backoff.max.ms = 1000 09:48:58 policy-pap | retry.backoff.ms = 100 09:48:58 policy-pap | sasl.client.callback.handler.class = null 09:48:58 policy-pap | sasl.jaas.config = null 09:48:58 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 09:48:58 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 09:48:58 policy-pap | sasl.kerberos.service.name = null 09:48:58 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 09:48:58 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 09:48:58 policy-pap | sasl.login.callback.handler.class = null 09:48:58 policy-pap | sasl.login.class = null 09:48:58 policy-pap | 
sasl.login.connect.timeout.ms = null 09:48:58 policy-pap | sasl.login.read.timeout.ms = null 09:48:58 policy-pap | sasl.login.refresh.buffer.seconds = 300 09:48:58 policy-pap | sasl.login.refresh.min.period.seconds = 60 09:48:58 policy-pap | sasl.login.refresh.window.factor = 0.8 09:48:58 policy-pap | sasl.login.refresh.window.jitter = 0.05 09:48:58 policy-pap | sasl.login.retry.backoff.max.ms = 10000 09:48:58 policy-pap | sasl.login.retry.backoff.ms = 100 09:48:58 policy-pap | sasl.mechanism = GSSAPI 09:48:58 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 09:48:58 policy-pap | sasl.oauthbearer.expected.audience = null 09:48:58 policy-pap | sasl.oauthbearer.expected.issuer = null 09:48:58 policy-pap | sasl.oauthbearer.header.urlencode = false 09:48:58 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 09:48:58 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 09:48:58 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 09:48:58 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 09:48:58 policy-pap | sasl.oauthbearer.scope.claim.name = scope 09:48:58 policy-pap | sasl.oauthbearer.sub.claim.name = sub 09:48:58 policy-pap | sasl.oauthbearer.token.endpoint.url = null 09:48:58 policy-pap | security.protocol = PLAINTEXT 09:48:58 policy-pap | security.providers = null 09:48:58 policy-pap | send.buffer.bytes = 131072 09:48:58 policy-pap | socket.connection.setup.timeout.max.ms = 30000 09:48:58 policy-pap | socket.connection.setup.timeout.ms = 10000 09:48:58 policy-pap | ssl.cipher.suites = null 09:48:58 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 09:48:58 policy-pap | ssl.endpoint.identification.algorithm = https 09:48:58 policy-pap | ssl.engine.factory.class = null 09:48:58 policy-pap | ssl.key.password = null 09:48:58 policy-pap | ssl.keymanager.algorithm = SunX509 09:48:58 policy-pap | ssl.keystore.certificate.chain = null 09:48:58 policy-pap | ssl.keystore.key = null 09:48:58 policy-pap | ssl.keystore.location = null 09:48:58 policy-pap | ssl.keystore.password = null 09:48:58 policy-pap | ssl.keystore.type = JKS 09:48:58 policy-pap | ssl.protocol = TLSv1.3 09:48:58 policy-pap | ssl.provider = null 09:48:58 policy-pap | ssl.secure.random.implementation = null 09:48:58 policy-pap | ssl.trustmanager.algorithm = PKIX 09:48:58 policy-pap | ssl.truststore.certificates = null 09:48:58 policy-pap | ssl.truststore.location = null 09:48:58 policy-pap | ssl.truststore.password = null 09:48:58 policy-pap | ssl.truststore.type = JKS 09:48:58 policy-pap | transaction.timeout.ms = 60000 09:48:58 policy-pap | transactional.id = null 09:48:58 policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 09:48:58 policy-pap | 09:48:58 policy-pap | [2025-06-19T09:43:27.401+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector 09:48:58 policy-pap | [2025-06-19T09:43:27.402+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-2] Instantiated an idempotent producer. 
09:48:58 policy-pap | [2025-06-19T09:43:27.408+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 09:48:58 policy-pap | [2025-06-19T09:43:27.409+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 09:48:58 policy-pap | [2025-06-19T09:43:27.409+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750326207408 09:48:58 policy-pap | [2025-06-19T09:43:27.409+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=cf825650-c420-4695-adb7-68635567ef8e, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 09:48:58 policy-pap | [2025-06-19T09:43:27.409+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator 09:48:58 policy-pap | [2025-06-19T09:43:27.409+00:00|INFO|ServiceManager|main] Policy PAP starting PDP publisher 09:48:58 policy-pap | [2025-06-19T09:43:27.411+00:00|INFO|ServiceManager|main] Policy PAP starting Policy Notification publisher 09:48:58 policy-pap | [2025-06-19T09:43:27.412+00:00|INFO|ServiceManager|main] Policy PAP starting PDP update timers 09:48:58 policy-pap | [2025-06-19T09:43:27.413+00:00|INFO|ServiceManager|main] Policy PAP starting PDP state-change timers 09:48:58 policy-pap | [2025-06-19T09:43:27.416+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification lock 09:48:58 policy-pap | [2025-06-19T09:43:27.416+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification requests 09:48:58 policy-pap | [2025-06-19T09:43:27.417+00:00|INFO|ServiceManager|main] Policy PAP starting PDP expiration timer 09:48:58 policy-pap | [2025-06-19T09:43:27.417+00:00|INFO|TimerManager|Thread-9] timer manager update started 09:48:58 policy-pap | [2025-06-19T09:43:27.418+00:00|INFO|ServiceManager|main] Policy PAP started 09:48:58 policy-pap | [2025-06-19T09:43:27.418+00:00|INFO|TimerManager|Thread-10] timer manager state-change started 09:48:58 policy-pap | [2025-06-19T09:43:27.418+00:00|INFO|PolicyPapApplication|main] Started PolicyPapApplication in 11.269 seconds (process running for 11.849) 09:48:58 policy-pap | [2025-06-19T09:43:27.844+00:00|INFO|Metadata|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Cluster ID: o0KanLTCS6-DDA3O4PV1_w 09:48:58 policy-pap | [2025-06-19T09:43:27.844+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: o0KanLTCS6-DDA3O4PV1_w 09:48:58 policy-pap | [2025-06-19T09:43:27.856+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 3 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 09:48:58 policy-pap | [2025-06-19T09:43:27.856+00:00|INFO|Metadata|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Cluster ID: o0KanLTCS6-DDA3O4PV1_w 09:48:58 policy-pap | [2025-06-19T09:43:27.889+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-be3aa0f9-34f3-4045-970c-8ec59634b69d-3, groupId=be3aa0f9-34f3-4045-970c-8ec59634b69d] The metadata response from the cluster reported a recoverable issue with correlation id 3 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 09:48:58 policy-pap | [2025-06-19T09:43:27.889+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-be3aa0f9-34f3-4045-970c-8ec59634b69d-3, groupId=be3aa0f9-34f3-4045-970c-8ec59634b69d] Cluster ID: o0KanLTCS6-DDA3O4PV1_w 09:48:58 policy-pap | 
[2025-06-19T09:43:27.896+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 0 with epoch 0 09:48:58 policy-pap | [2025-06-19T09:43:27.897+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] ProducerId set to 1 with epoch 0 09:48:58 policy-pap | [2025-06-19T09:43:28.008+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-be3aa0f9-34f3-4045-970c-8ec59634b69d-3, groupId=be3aa0f9-34f3-4045-970c-8ec59634b69d] The metadata response from the cluster reported a recoverable issue with correlation id 7 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 09:48:58 policy-pap | [2025-06-19T09:43:28.028+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 7 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 09:48:58 policy-pap | [2025-06-19T09:43:28.244+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 9 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 09:48:58 policy-pap | [2025-06-19T09:43:28.258+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-be3aa0f9-34f3-4045-970c-8ec59634b69d-3, groupId=be3aa0f9-34f3-4045-970c-8ec59634b69d] The metadata response from the cluster reported a recoverable issue with correlation id 9 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 09:48:58 policy-pap | [2025-06-19T09:43:28.669+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 11 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 09:48:58 policy-pap | [2025-06-19T09:43:28.714+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-be3aa0f9-34f3-4045-970c-8ec59634b69d-3, groupId=be3aa0f9-34f3-4045-970c-8ec59634b69d] The metadata response from the cluster reported a recoverable issue with correlation id 11 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 09:48:58 policy-pap | [2025-06-19T09:43:29.553+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 09:48:58 policy-pap | [2025-06-19T09:43:29.560+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group 09:48:58 policy-pap | [2025-06-19T09:43:29.587+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-be3aa0f9-34f3-4045-970c-8ec59634b69d-3, groupId=be3aa0f9-34f3-4045-970c-8ec59634b69d] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 09:48:58 policy-pap | [2025-06-19T09:43:29.590+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-be3aa0f9-34f3-4045-970c-8ec59634b69d-3, groupId=be3aa0f9-34f3-4045-970c-8ec59634b69d] (Re-)joining group 09:48:58 policy-pap | [2025-06-19T09:43:29.590+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: need to re-join with the given member-id: consumer-policy-pap-4-453b806e-3f06-4534-9706-bca0aee8c7cf 09:48:58 
policy-pap | [2025-06-19T09:43:29.591+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group 09:48:58 policy-pap | [2025-06-19T09:43:29.599+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-be3aa0f9-34f3-4045-970c-8ec59634b69d-3, groupId=be3aa0f9-34f3-4045-970c-8ec59634b69d] Request joining group due to: need to re-join with the given member-id: consumer-be3aa0f9-34f3-4045-970c-8ec59634b69d-3-35c318e7-8a52-4dcc-85bf-70368edd1888 09:48:58 policy-pap | [2025-06-19T09:43:29.599+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-be3aa0f9-34f3-4045-970c-8ec59634b69d-3, groupId=be3aa0f9-34f3-4045-970c-8ec59634b69d] (Re-)joining group 09:48:58 policy-pap | [2025-06-19T09:43:32.621+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-be3aa0f9-34f3-4045-970c-8ec59634b69d-3, groupId=be3aa0f9-34f3-4045-970c-8ec59634b69d] Successfully joined group with generation Generation{generationId=1, memberId='consumer-be3aa0f9-34f3-4045-970c-8ec59634b69d-3-35c318e7-8a52-4dcc-85bf-70368edd1888', protocol='range'} 09:48:58 policy-pap | [2025-06-19T09:43:32.623+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully joined group with generation Generation{generationId=1, memberId='consumer-policy-pap-4-453b806e-3f06-4534-9706-bca0aee8c7cf', protocol='range'} 09:48:58 policy-pap | [2025-06-19T09:43:32.631+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-be3aa0f9-34f3-4045-970c-8ec59634b69d-3, groupId=be3aa0f9-34f3-4045-970c-8ec59634b69d] Finished assignment for group at generation 1: {consumer-be3aa0f9-34f3-4045-970c-8ec59634b69d-3-35c318e7-8a52-4dcc-85bf-70368edd1888=Assignment(partitions=[policy-pdp-pap-0])} 09:48:58 policy-pap | [2025-06-19T09:43:32.631+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Finished assignment for group at generation 1: {consumer-policy-pap-4-453b806e-3f06-4534-9706-bca0aee8c7cf=Assignment(partitions=[policy-pdp-pap-0])} 09:48:58 policy-pap | [2025-06-19T09:43:32.683+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully synced group in generation Generation{generationId=1, memberId='consumer-policy-pap-4-453b806e-3f06-4534-9706-bca0aee8c7cf', protocol='range'} 09:48:58 policy-pap | [2025-06-19T09:43:32.684+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-be3aa0f9-34f3-4045-970c-8ec59634b69d-3, groupId=be3aa0f9-34f3-4045-970c-8ec59634b69d] Successfully synced group in generation Generation{generationId=1, memberId='consumer-be3aa0f9-34f3-4045-970c-8ec59634b69d-3-35c318e7-8a52-4dcc-85bf-70368edd1888', protocol='range'} 09:48:58 policy-pap | [2025-06-19T09:43:32.684+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 09:48:58 policy-pap | [2025-06-19T09:43:32.684+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-be3aa0f9-34f3-4045-970c-8ec59634b69d-3, groupId=be3aa0f9-34f3-4045-970c-8ec59634b69d] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 09:48:58 policy-pap | 
[2025-06-19T09:43:32.690+00:00|INFO|ConsumerRebalanceListenerInvoker|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Adding newly assigned partitions: policy-pdp-pap-0 09:48:58 policy-pap | [2025-06-19T09:43:32.690+00:00|INFO|ConsumerRebalanceListenerInvoker|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-be3aa0f9-34f3-4045-970c-8ec59634b69d-3, groupId=be3aa0f9-34f3-4045-970c-8ec59634b69d] Adding newly assigned partitions: policy-pdp-pap-0 09:48:58 policy-pap | [2025-06-19T09:43:32.707+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Found no committed offset for partition policy-pdp-pap-0 09:48:58 policy-pap | [2025-06-19T09:43:32.708+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-be3aa0f9-34f3-4045-970c-8ec59634b69d-3, groupId=be3aa0f9-34f3-4045-970c-8ec59634b69d] Found no committed offset for partition policy-pdp-pap-0 09:48:58 policy-pap | [2025-06-19T09:43:32.723+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-be3aa0f9-34f3-4045-970c-8ec59634b69d-3, groupId=be3aa0f9-34f3-4045-970c-8ec59634b69d] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 09:48:58 policy-pap | [2025-06-19T09:43:32.724+00:00|INFO|SubscriptionState|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 
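
The join/sync/assign sequence above is the standard consumer-group protocol: both groups receive a range assignment of the single partition policy-pdp-pap-0, find no committed offset, and fall back to auto.offset.reset; the reset lands on offset 0 because the topic was only just created, so "latest" and "earliest" coincide. A sketch of the subscribe/poll cycle that produces these messages, with a hypothetical rebalance listener; this is not the PAP implementation, just the shape of the client API involved:

    import java.time.Duration;
    import java.util.Collection;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class RebalanceSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "policy-pap");
            props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("policy-pdp-pap"), new ConsumerRebalanceListener() {
                    @Override public void onPartitionsAssigned(Collection<TopicPartition> parts) {
                        // Corresponds to "Adding newly assigned partitions: policy-pdp-pap-0".
                        System.out.println("assigned: " + parts);
                    }
                    @Override public void onPartitionsRevoked(Collection<TopicPartition> parts) { }
                });
                // The first poll drives the join/sync handshake logged above.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(15));
                records.forEach(r -> System.out.println(r.value()));
            }
        }
    }
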
09:48:58 policy-pap | [2025-06-19T09:43:41.610+00:00|INFO|[/policy/pap/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet' 09:48:58 policy-pap | [2025-06-19T09:43:41.610+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet' 09:48:58 policy-pap | [2025-06-19T09:43:41.612+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 2 ms 09:48:58 policy-pap | [2025-06-19T09:45:23.205+00:00|INFO|OrderedServiceImpl|KAFKA-source-policy-pdp-pap] ***** OrderedServiceImpl implementers: 09:48:58 policy-pap | [] 09:48:58 policy-pap | [2025-06-19T09:45:23.206+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 09:48:58 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Status Registration Message","response":null,"policies":[],"name":"opa-56bc6029-e683-4320-a6d7-f0316897aa5b","requestId":"5853735e-618a-4254-a15c-a21b2acbf8b4","pdpGroup":"opaGroup","pdpSubgroup":null,"timestampMs":"1750326323166","deploymentInstanceInfo":""} 09:48:58 policy-pap | [2025-06-19T09:45:23.206+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 09:48:58 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Status Registration Message","response":null,"policies":[],"name":"opa-56bc6029-e683-4320-a6d7-f0316897aa5b","requestId":"5853735e-618a-4254-a15c-a21b2acbf8b4","pdpGroup":"opaGroup","pdpSubgroup":null,"timestampMs":"1750326323166","deploymentInstanceInfo":""} 09:48:58 policy-pap | [2025-06-19T09:45:23.212+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus 09:48:58 policy-pap | [2025-06-19T09:45:23.758+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpUpdate starting 09:48:58 policy-pap | [2025-06-19T09:45:23.758+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpUpdate starting listener 09:48:58 policy-pap | [2025-06-19T09:45:23.758+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpUpdate starting timer 09:48:58 policy-pap | [2025-06-19T09:45:23.759+00:00|INFO|TimerManager|KAFKA-source-policy-heartbeat] update timer registered Timer [name=7dd8d058-c503-4f0c-a9fe-b4d4f19216f9, expireMs=1750326353759] 09:48:58 policy-pap | [2025-06-19T09:45:23.761+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpUpdate starting enqueue 09:48:58 policy-pap | [2025-06-19T09:45:23.762+00:00|INFO|TimerManager|Thread-9] update timer waiting 29998ms Timer [name=7dd8d058-c503-4f0c-a9fe-b4d4f19216f9, expireMs=1750326353759] 09:48:58 policy-pap | [2025-06-19T09:45:23.762+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpUpdate started 09:48:58 policy-pap | [2025-06-19T09:45:23.766+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 09:48:58 policy-pap | 
{"source":"pap-1593551e-cd74-40f6-b32a-093109ad43dc","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.slice.capacity.check":"ewogICAgInRocmVzaG9sZCI6IDcwCn0="},"policy":{"slice.capacity.check":"cGFja2FnZSBzbGljZS5jYXBhY2l0eS5jaGVjawoKIyBEZWZhdWx0IHJ1bGUgdG8gZGVueSBpZiBubyBwb2xpY3kgbWF0Y2hlcwpkZWZhdWx0IGRlY2lzaW9uIDo9IHsKCSJyZXN1bHQiOiAiUGVybWl0IiwKCSJyZWFzb24iOiAiTm8gbWF0Y2hpbmcgcnVsZXMgYXBwbGllZCIsCn0KCiMgRGVueSBydWxlIGZvciBgc3N0ID0gMWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAxCglpbnB1dC50b3RhbF9yZXNvdXJjZSA+IGRhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGQKfQoKIyBEZW55IHJ1bGUgZm9yIGBzc3QgPSAyOWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAyOQoJaW5wdXQudG90YWxfcmVzb3VyY2UgPiBkYXRhLm5vZGUuc2xpY2UuY2FwYWNpdHkuY2hlY2sudGhyZXNob2xkCn0K"}},"name":"slice.capacity.check","version":"1.0.0","metadata":{"policy-id":"slice.capacity.check","policy-version":"1.0.0"}}],"messageName":"PDP_UPDATE","requestId":"7dd8d058-c503-4f0c-a9fe-b4d4f19216f9","timestampMs":1750326323733,"name":"opa-56bc6029-e683-4320-a6d7-f0316897aa5b","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 09:48:58 policy-pap | [2025-06-19T09:45:23.824+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 09:48:58 policy-pap | 
{"source":"pap-1593551e-cd74-40f6-b32a-093109ad43dc","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.slice.capacity.check":"ewogICAgInRocmVzaG9sZCI6IDcwCn0="},"policy":{"slice.capacity.check":"cGFja2FnZSBzbGljZS5jYXBhY2l0eS5jaGVjawoKIyBEZWZhdWx0IHJ1bGUgdG8gZGVueSBpZiBubyBwb2xpY3kgbWF0Y2hlcwpkZWZhdWx0IGRlY2lzaW9uIDo9IHsKCSJyZXN1bHQiOiAiUGVybWl0IiwKCSJyZWFzb24iOiAiTm8gbWF0Y2hpbmcgcnVsZXMgYXBwbGllZCIsCn0KCiMgRGVueSBydWxlIGZvciBgc3N0ID0gMWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAxCglpbnB1dC50b3RhbF9yZXNvdXJjZSA+IGRhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGQKfQoKIyBEZW55IHJ1bGUgZm9yIGBzc3QgPSAyOWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAyOQoJaW5wdXQudG90YWxfcmVzb3VyY2UgPiBkYXRhLm5vZGUuc2xpY2UuY2FwYWNpdHkuY2hlY2sudGhyZXNob2xkCn0K"}},"name":"slice.capacity.check","version":"1.0.0","metadata":{"policy-id":"slice.capacity.check","policy-version":"1.0.0"}}],"messageName":"PDP_UPDATE","requestId":"7dd8d058-c503-4f0c-a9fe-b4d4f19216f9","timestampMs":1750326323733,"name":"opa-56bc6029-e683-4320-a6d7-f0316897aa5b","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 09:48:58 policy-pap | [2025-06-19T09:45:23.825+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 09:48:58 policy-pap | [2025-06-19T09:45:23.825+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 09:48:58 policy-pap | 
{"source":"pap-1593551e-cd74-40f6-b32a-093109ad43dc","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.slice.capacity.check":"ewogICAgInRocmVzaG9sZCI6IDcwCn0="},"policy":{"slice.capacity.check":"cGFja2FnZSBzbGljZS5jYXBhY2l0eS5jaGVjawoKIyBEZWZhdWx0IHJ1bGUgdG8gZGVueSBpZiBubyBwb2xpY3kgbWF0Y2hlcwpkZWZhdWx0IGRlY2lzaW9uIDo9IHsKCSJyZXN1bHQiOiAiUGVybWl0IiwKCSJyZWFzb24iOiAiTm8gbWF0Y2hpbmcgcnVsZXMgYXBwbGllZCIsCn0KCiMgRGVueSBydWxlIGZvciBgc3N0ID0gMWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAxCglpbnB1dC50b3RhbF9yZXNvdXJjZSA+IGRhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGQKfQoKIyBEZW55IHJ1bGUgZm9yIGBzc3QgPSAyOWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAyOQoJaW5wdXQudG90YWxfcmVzb3VyY2UgPiBkYXRhLm5vZGUuc2xpY2UuY2FwYWNpdHkuY2hlY2sudGhyZXNob2xkCn0K"}},"name":"slice.capacity.check","version":"1.0.0","metadata":{"policy-id":"slice.capacity.check","policy-version":"1.0.0"}}],"messageName":"PDP_UPDATE","requestId":"7dd8d058-c503-4f0c-a9fe-b4d4f19216f9","timestampMs":1750326323733,"name":"opa-56bc6029-e683-4320-a6d7-f0316897aa5b","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 09:48:58 policy-pap | [2025-06-19T09:45:23.826+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 09:48:58 policy-pap | [2025-06-19T09:45:23.855+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 09:48:58 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"7dd8d058-c503-4f0c-a9fe-b4d4f19216f9","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"slice.capacity.check\": \"1.0.0\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-56bc6029-e683-4320-a6d7-f0316897aa5b","requestId":"93be3da5-9e97-4334-a765-b0ca6ed3a779","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750326323842","deploymentInstanceInfo":""} 09:48:58 policy-pap | [2025-06-19T09:45:23.857+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 09:48:58 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"7dd8d058-c503-4f0c-a9fe-b4d4f19216f9","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"slice.capacity.check\": \"1.0.0\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-56bc6029-e683-4320-a6d7-f0316897aa5b","requestId":"93be3da5-9e97-4334-a765-b0ca6ed3a779","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750326323842","deploymentInstanceInfo":""} 09:48:58 policy-pap | [2025-06-19T09:45:23.857+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpUpdate stopping 09:48:58 policy-pap | 
09:48:58 policy-pap | [2025-06-19T09:45:23.857+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 7dd8d058-c503-4f0c-a9fe-b4d4f19216f9
09:48:58 policy-pap | [2025-06-19T09:45:23.858+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpUpdate stopping enqueue
09:48:58 policy-pap | [2025-06-19T09:45:23.858+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpUpdate stopping timer
09:48:58 policy-pap | [2025-06-19T09:45:23.858+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=7dd8d058-c503-4f0c-a9fe-b4d4f19216f9, expireMs=1750326353759]
09:48:58 policy-pap | [2025-06-19T09:45:23.858+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpUpdate stopping listener
09:48:58 policy-pap | [2025-06-19T09:45:23.858+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpUpdate stopped
09:48:58 policy-pap | [2025-06-19T09:45:23.868+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpUpdate successful
09:48:58 policy-pap | [2025-06-19T09:45:23.868+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-56bc6029-e683-4320-a6d7-f0316897aa5b start publishing next request
09:48:58 policy-pap | [2025-06-19T09:45:23.868+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpStateChange starting
09:48:58 policy-pap | [2025-06-19T09:45:23.868+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpStateChange starting listener
09:48:58 policy-pap | [2025-06-19T09:45:23.868+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpStateChange starting timer
09:48:58 policy-pap | [2025-06-19T09:45:23.868+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer registered Timer [name=76aa2592-933e-4236-ab45-ef442a6da711, expireMs=1750326353868]
09:48:58 policy-pap | [2025-06-19T09:45:23.868+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpStateChange starting enqueue
09:48:58 policy-pap | [2025-06-19T09:45:23.868+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpStateChange started
09:48:58 policy-pap | [2025-06-19T09:45:23.868+00:00|INFO|TimerManager|Thread-10] state-change timer waiting 30000ms Timer [name=76aa2592-933e-4236-ab45-ef442a6da711, expireMs=1750326353868]
09:48:58 policy-pap | [2025-06-19T09:45:23.870+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification]
09:48:58 policy-pap | {"deployed-policies":[{"policy-type":"onap.policies.native.opa","policy-type-version":"1.0.0","policy-id":"slice.capacity.check","policy-version":"1.0.0","success-count":1,"failure-count":0,"incomplete-count":0}],"undeployed-policies":[]}
09:48:58 policy-pap | [2025-06-19T09:45:23.870+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
09:48:58 policy-pap | {"source":"pap-1593551e-cd74-40f6-b32a-093109ad43dc","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"76aa2592-933e-4236-ab45-ef442a6da711","timestampMs":1750326323734,"name":"opa-56bc6029-e683-4320-a6d7-f0316897aa5b","pdpGroup":"opaGroup","pdpSubgroup":"opa"}
09:48:58 policy-pap | [2025-06-19T09:45:23.885+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
09:48:58 policy-pap | {"source":"pap-1593551e-cd74-40f6-b32a-093109ad43dc","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"76aa2592-933e-4236-ab45-ef442a6da711","timestampMs":1750326323734,"name":"opa-56bc6029-e683-4320-a6d7-f0316897aa5b","pdpGroup":"opaGroup","pdpSubgroup":"opa"}
09:48:58 policy-pap | [2025-06-19T09:45:23.886+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_STATE_CHANGE
09:48:58 policy-pap | [2025-06-19T09:45:23.894+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
09:48:58 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message to Pdp State Change","response":{"responseTo":"76aa2592-933e-4236-ab45-ef442a6da711","responseStatus":"SUCCESS","responseMessage":"PDP State Changed From PASSIVE TO Active"},"policies":[],"name":"opa-56bc6029-e683-4320-a6d7-f0316897aa5b","requestId":"26795b56-69ea-4882-b31e-97b1af122c5e","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750326323881","deploymentInstanceInfo":""}
09:48:58 policy-pap | [2025-06-19T09:45:23.894+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 76aa2592-933e-4236-ab45-ef442a6da711
09:48:58 policy-pap | [2025-06-19T09:45:23.896+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] The metadata response from the cluster reported a recoverable issue with correlation id 7 : {policy-notification=LEADER_NOT_AVAILABLE}
09:48:58 policy-pap | [2025-06-19T09:45:24.244+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
09:48:58 policy-pap | {"source":"pap-1593551e-cd74-40f6-b32a-093109ad43dc","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"76aa2592-933e-4236-ab45-ef442a6da711","timestampMs":1750326323734,"name":"opa-56bc6029-e683-4320-a6d7-f0316897aa5b","pdpGroup":"opaGroup","pdpSubgroup":"opa"}
09:48:58 policy-pap | [2025-06-19T09:45:24.244+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATE_CHANGE
09:48:58 policy-pap | [2025-06-19T09:45:24.248+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
09:48:58 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message to Pdp State Change","response":{"responseTo":"76aa2592-933e-4236-ab45-ef442a6da711","responseStatus":"SUCCESS","responseMessage":"PDP State Changed From PASSIVE TO Active"},"policies":[],"name":"opa-56bc6029-e683-4320-a6d7-f0316897aa5b","requestId":"26795b56-69ea-4882-b31e-97b1af122c5e","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750326323881","deploymentInstanceInfo":""}
09:48:58 policy-pap | [2025-06-19T09:45:24.248+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpStateChange stopping
09:48:58 policy-pap | [2025-06-19T09:45:24.248+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpStateChange stopping enqueue
09:48:58 policy-pap | [2025-06-19T09:45:24.248+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpStateChange stopping timer
09:48:58 policy-pap | [2025-06-19T09:45:24.249+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer cancelled Timer [name=76aa2592-933e-4236-ab45-ef442a6da711, expireMs=1750326353868]
09:48:58 policy-pap | [2025-06-19T09:45:24.249+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpStateChange stopping listener
09:48:58 policy-pap | [2025-06-19T09:45:24.249+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpStateChange stopped
09:48:58 policy-pap | [2025-06-19T09:45:24.249+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpStateChange successful
09:48:58 policy-pap | [2025-06-19T09:45:24.249+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-56bc6029-e683-4320-a6d7-f0316897aa5b start publishing next request
09:48:58 policy-pap | [2025-06-19T09:45:24.249+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpUpdate starting
09:48:58 policy-pap | [2025-06-19T09:45:24.249+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpUpdate starting listener
09:48:58 policy-pap | [2025-06-19T09:45:24.249+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpUpdate starting timer
09:48:58 policy-pap | [2025-06-19T09:45:24.249+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer registered Timer [name=1b539a1c-1904-44ff-83e7-d5a08ccbce45, expireMs=1750326354249]
09:48:58 policy-pap | [2025-06-19T09:45:24.249+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpUpdate starting enqueue
09:48:58 policy-pap | [2025-06-19T09:45:24.249+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpUpdate started
09:48:58 policy-pap | [2025-06-19T09:45:24.249+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
09:48:58 policy-pap | {"source":"pap-1593551e-cd74-40f6-b32a-093109ad43dc","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"1b539a1c-1904-44ff-83e7-d5a08ccbce45","timestampMs":1750326324237,"name":"opa-56bc6029-e683-4320-a6d7-f0316897aa5b","pdpGroup":"opaGroup","pdpSubgroup":"opa"}
09:48:58 policy-pap | [2025-06-19T09:45:24.258+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
09:48:58 policy-pap | {"source":"pap-1593551e-cd74-40f6-b32a-093109ad43dc","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"1b539a1c-1904-44ff-83e7-d5a08ccbce45","timestampMs":1750326324237,"name":"opa-56bc6029-e683-4320-a6d7-f0316897aa5b","pdpGroup":"opaGroup","pdpSubgroup":"opa"}
09:48:58 policy-pap | [2025-06-19T09:45:24.258+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE
09:48:58 policy-pap | [2025-06-19T09:45:24.258+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
09:48:58 policy-pap | {"source":"pap-1593551e-cd74-40f6-b32a-093109ad43dc","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"1b539a1c-1904-44ff-83e7-d5a08ccbce45","timestampMs":1750326324237,"name":"opa-56bc6029-e683-4320-a6d7-f0316897aa5b","pdpGroup":"opaGroup","pdpSubgroup":"opa"}
09:48:58 policy-pap | [2025-06-19T09:45:24.258+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE
09:48:58 policy-pap | [2025-06-19T09:45:24.267+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
09:48:58 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"1b539a1c-1904-44ff-83e7-d5a08ccbce45","responseStatus":"SUCCESS","responseMessage":"PDP UPDATE is successfull"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-56bc6029-e683-4320-a6d7-f0316897aa5b","requestId":"df2d7b1f-2f76-4bf5-a4e0-54cd5aa15cb1","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750326324256","deploymentInstanceInfo":""}
09:48:58 policy-pap | [2025-06-19T09:45:24.267+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
09:48:58 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"1b539a1c-1904-44ff-83e7-d5a08ccbce45","responseStatus":"SUCCESS","responseMessage":"PDP UPDATE is successfull"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-56bc6029-e683-4320-a6d7-f0316897aa5b","requestId":"df2d7b1f-2f76-4bf5-a4e0-54cd5aa15cb1","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750326324256","deploymentInstanceInfo":""}
09:48:58 policy-pap | [2025-06-19T09:45:24.268+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 1b539a1c-1904-44ff-83e7-d5a08ccbce45
09:48:58 policy-pap | [2025-06-19T09:45:24.268+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpUpdate stopping
09:48:58 policy-pap | [2025-06-19T09:45:24.268+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpUpdate stopping enqueue
09:48:58 policy-pap | [2025-06-19T09:45:24.268+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpUpdate stopping timer
09:48:58 policy-pap | [2025-06-19T09:45:24.268+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=1b539a1c-1904-44ff-83e7-d5a08ccbce45, expireMs=1750326354249]
09:48:58 policy-pap | [2025-06-19T09:45:24.268+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpUpdate stopping listener
09:48:58 policy-pap | [2025-06-19T09:45:24.268+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpUpdate stopped
09:48:58 policy-pap | [2025-06-19T09:45:24.275+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpUpdate successful
09:48:58 policy-pap | [2025-06-19T09:45:24.275+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-56bc6029-e683-4320-a6d7-f0316897aa5b has no more requests
09:48:58 policy-pap | [2025-06-19T09:45:27.418+00:00|INFO|PdpModifyRequestMap|pool-3-thread-1] check for PDP records older than 360000ms
09:48:58 policy-pap | [2025-06-19T09:45:53.760+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=7dd8d058-c503-4f0c-a9fe-b4d4f19216f9, expireMs=1750326353759]
09:48:58 policy-pap | [2025-06-19T09:45:53.869+00:00|INFO|TimerManager|Thread-10] state-change timer discarded (expired) Timer [name=76aa2592-933e-4236-ab45-ef442a6da711, expireMs=1750326353868]
09:48:58 policy-pap | [2025-06-19T09:46:23.178+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
09:48:58 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp heartbeat","response":null,"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-56bc6029-e683-4320-a6d7-f0316897aa5b","requestId":"53ce1623-3a2d-4f45-b150-4b3dfd3370d6","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750326383165","deploymentInstanceInfo":""}
09:48:58 policy-pap | [2025-06-19T09:46:23.179+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
09:48:58 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp heartbeat","response":null,"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-56bc6029-e683-4320-a6d7-f0316897aa5b","requestId":"53ce1623-3a2d-4f45-b150-4b3dfd3370d6","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750326383165","deploymentInstanceInfo":""}
09:48:58 policy-pap | [2025-06-19T09:46:23.180+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus
09:48:58 policy-pap | [2025-06-19T09:46:37.899+00:00|INFO|SessionData|http-nio-6969-exec-7] cache group opaGroup
09:48:58 policy-pap | [2025-06-19T09:46:37.900+00:00|INFO|PdpGroupDeployProvider|http-nio-6969-exec-7] add policy zoneB 1.0.6 to subgroup opaGroup opa count=2
09:48:58 policy-pap | [2025-06-19T09:46:37.901+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-7] Registering a deploy for policy zoneB 1.0.6
09:48:58 policy-pap | [2025-06-19T09:46:37.902+00:00|INFO|SessionData|http-nio-6969-exec-7] add update opa-56bc6029-e683-4320-a6d7-f0316897aa5b opaGroup opa policies=1
09:48:58 policy-pap | [2025-06-19T09:46:37.903+00:00|INFO|SessionData|http-nio-6969-exec-7] update cached group opaGroup
09:48:58 policy-pap | [2025-06-19T09:46:37.903+00:00|INFO|SessionData|http-nio-6969-exec-7] updating DB group opaGroup
09:48:58 policy-pap | [2025-06-19T09:46:37.918+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-7] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=opaGroup, pdpType=opa, policy=zoneB 1.0.6, action=DEPLOYMENT, timestamp=2025-06-19T09:46:37Z, user=policyadmin)]
09:48:58 policy-pap | [2025-06-19T09:46:37.946+00:00|INFO|ServiceManager|http-nio-6969-exec-7] opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpUpdate starting
09:48:58 policy-pap | [2025-06-19T09:46:37.946+00:00|INFO|ServiceManager|http-nio-6969-exec-7] opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpUpdate starting listener
09:48:58 policy-pap | [2025-06-19T09:46:37.946+00:00|INFO|ServiceManager|http-nio-6969-exec-7] opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpUpdate starting timer
09:48:58 policy-pap | [2025-06-19T09:46:37.946+00:00|INFO|TimerManager|http-nio-6969-exec-7] update timer registered Timer [name=33cf87c0-6f01-4d54-8ddf-4af07f9a8cdd, expireMs=1750326427946]
09:48:58 policy-pap | [2025-06-19T09:46:37.947+00:00|INFO|ServiceManager|http-nio-6969-exec-7] opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpUpdate starting enqueue
09:48:58 policy-pap | [2025-06-19T09:46:37.947+00:00|INFO|ServiceManager|http-nio-6969-exec-7] opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpUpdate started
09:48:58 policy-pap | [2025-06-19T09:46:37.947+00:00|INFO|TimerManager|Thread-9] update timer waiting 29999ms Timer [name=33cf87c0-6f01-4d54-8ddf-4af07f9a8cdd, expireMs=1750326427946]
09:48:58 policy-pap | [2025-06-19T09:46:37.947+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
{"source":"pap-1593551e-cd74-40f6-b32a-093109ad43dc","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.zoneB":"ewogICJ6b25lIjogewogICAgInpvbmVfYWNjZXNzX2xvZ3MiOiBbCiAgICAgIHsgImxvZ19pZCI6ICJsb2cxIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDA5OjAwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJncmFudGVkIiwgInVzZXIiOiAidXNlcjEiIH0sCiAgICAgIHsgImxvZ19pZCI6ICJsb2cyIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDEwOjMwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJkZW5pZWQiLCAidXNlciI6ICJ1c2VyMiIgfSwKICAgICAgeyAibG9nX2lkIjogImxvZzMiLCAidGltZXN0YW1wIjogIjIwMjQtMTEtMDFUMTE6MDA6MDBaIiwgInpvbmVfaWQiOiAiem9uZUIiLCAiYWNjZXNzIjogImdyYW50ZWQiLCAidXNlciI6ICJ1c2VyMyIgfQogICAgXQogIH0KfQ=="},"policy":{"zoneB":"cGFja2FnZSB6b25lQgogCmltcG9ydCByZWdvLnYxCiAKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQogCmFsbG93IGlmIHsKICAgIGhhc196b25lX2FjY2VzcwogICAgYWN0aW9uX2lzX2xvZ192aWV3Cn0KIAphY3Rpb25faXNfbG9nX3ZpZXcgaWYgewogICAgInZpZXciIGluIGlucHV0LmFjdGlvbnMKfQogCmhhc196b25lX2FjY2VzcyBjb250YWlucyBhY2Nlc3NfZGF0YSBpZiB7CiAgICBzb21lIHpvbmVfZGF0YSBpbiBkYXRhLm5vZGUuem9uZUIuem9uZS56b25lX2FjY2Vzc19sb2dzCiAgICB6b25lX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KICAgIHpvbmVfZGF0YS50aW1lc3RhbXAgPCBpbnB1dC50aW1lX3BlcmlvZC50bwogICAgem9uZV9kYXRhLnpvbmVfaWQgPT0gaW5wdXQuem9uZV9pZAogICAgYWNjZXNzX2RhdGEgOj0ge2RhdGF0eXBlOiB6b25lX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"zoneB","version":"1.0.6","metadata":{"policy-id":"zoneB","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"33cf87c0-6f01-4d54-8ddf-4af07f9a8cdd","timestampMs":1750326397902,"name":"opa-56bc6029-e683-4320-a6d7-f0316897aa5b","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 09:48:58 policy-pap | [2025-06-19T09:46:37.954+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 09:48:58 policy-pap | 
{"source":"pap-1593551e-cd74-40f6-b32a-093109ad43dc","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.zoneB":"ewogICJ6b25lIjogewogICAgInpvbmVfYWNjZXNzX2xvZ3MiOiBbCiAgICAgIHsgImxvZ19pZCI6ICJsb2cxIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDA5OjAwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJncmFudGVkIiwgInVzZXIiOiAidXNlcjEiIH0sCiAgICAgIHsgImxvZ19pZCI6ICJsb2cyIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDEwOjMwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJkZW5pZWQiLCAidXNlciI6ICJ1c2VyMiIgfSwKICAgICAgeyAibG9nX2lkIjogImxvZzMiLCAidGltZXN0YW1wIjogIjIwMjQtMTEtMDFUMTE6MDA6MDBaIiwgInpvbmVfaWQiOiAiem9uZUIiLCAiYWNjZXNzIjogImdyYW50ZWQiLCAidXNlciI6ICJ1c2VyMyIgfQogICAgXQogIH0KfQ=="},"policy":{"zoneB":"cGFja2FnZSB6b25lQgogCmltcG9ydCByZWdvLnYxCiAKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQogCmFsbG93IGlmIHsKICAgIGhhc196b25lX2FjY2VzcwogICAgYWN0aW9uX2lzX2xvZ192aWV3Cn0KIAphY3Rpb25faXNfbG9nX3ZpZXcgaWYgewogICAgInZpZXciIGluIGlucHV0LmFjdGlvbnMKfQogCmhhc196b25lX2FjY2VzcyBjb250YWlucyBhY2Nlc3NfZGF0YSBpZiB7CiAgICBzb21lIHpvbmVfZGF0YSBpbiBkYXRhLm5vZGUuem9uZUIuem9uZS56b25lX2FjY2Vzc19sb2dzCiAgICB6b25lX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KICAgIHpvbmVfZGF0YS50aW1lc3RhbXAgPCBpbnB1dC50aW1lX3BlcmlvZC50bwogICAgem9uZV9kYXRhLnpvbmVfaWQgPT0gaW5wdXQuem9uZV9pZAogICAgYWNjZXNzX2RhdGEgOj0ge2RhdGF0eXBlOiB6b25lX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"zoneB","version":"1.0.6","metadata":{"policy-id":"zoneB","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"33cf87c0-6f01-4d54-8ddf-4af07f9a8cdd","timestampMs":1750326397902,"name":"opa-56bc6029-e683-4320-a6d7-f0316897aa5b","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 09:48:58 policy-pap | [2025-06-19T09:46:37.954+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 09:48:58 policy-pap | [2025-06-19T09:46:37.955+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 09:48:58 policy-pap | 
{"source":"pap-1593551e-cd74-40f6-b32a-093109ad43dc","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.zoneB":"ewogICJ6b25lIjogewogICAgInpvbmVfYWNjZXNzX2xvZ3MiOiBbCiAgICAgIHsgImxvZ19pZCI6ICJsb2cxIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDA5OjAwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJncmFudGVkIiwgInVzZXIiOiAidXNlcjEiIH0sCiAgICAgIHsgImxvZ19pZCI6ICJsb2cyIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDEwOjMwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJkZW5pZWQiLCAidXNlciI6ICJ1c2VyMiIgfSwKICAgICAgeyAibG9nX2lkIjogImxvZzMiLCAidGltZXN0YW1wIjogIjIwMjQtMTEtMDFUMTE6MDA6MDBaIiwgInpvbmVfaWQiOiAiem9uZUIiLCAiYWNjZXNzIjogImdyYW50ZWQiLCAidXNlciI6ICJ1c2VyMyIgfQogICAgXQogIH0KfQ=="},"policy":{"zoneB":"cGFja2FnZSB6b25lQgogCmltcG9ydCByZWdvLnYxCiAKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQogCmFsbG93IGlmIHsKICAgIGhhc196b25lX2FjY2VzcwogICAgYWN0aW9uX2lzX2xvZ192aWV3Cn0KIAphY3Rpb25faXNfbG9nX3ZpZXcgaWYgewogICAgInZpZXciIGluIGlucHV0LmFjdGlvbnMKfQogCmhhc196b25lX2FjY2VzcyBjb250YWlucyBhY2Nlc3NfZGF0YSBpZiB7CiAgICBzb21lIHpvbmVfZGF0YSBpbiBkYXRhLm5vZGUuem9uZUIuem9uZS56b25lX2FjY2Vzc19sb2dzCiAgICB6b25lX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KICAgIHpvbmVfZGF0YS50aW1lc3RhbXAgPCBpbnB1dC50aW1lX3BlcmlvZC50bwogICAgem9uZV9kYXRhLnpvbmVfaWQgPT0gaW5wdXQuem9uZV9pZAogICAgYWNjZXNzX2RhdGEgOj0ge2RhdGF0eXBlOiB6b25lX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"zoneB","version":"1.0.6","metadata":{"policy-id":"zoneB","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"33cf87c0-6f01-4d54-8ddf-4af07f9a8cdd","timestampMs":1750326397902,"name":"opa-56bc6029-e683-4320-a6d7-f0316897aa5b","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 09:48:58 policy-pap | [2025-06-19T09:46:37.955+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 09:48:58 policy-pap | [2025-06-19T09:46:38.005+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 09:48:58 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"33cf87c0-6f01-4d54-8ddf-4af07f9a8cdd","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"zoneB\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"zoneB","version":"1.0.6"}],"name":"opa-56bc6029-e683-4320-a6d7-f0316897aa5b","requestId":"3a4e2f6a-7154-441c-b821-2775e96189c7","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750326397992","deploymentInstanceInfo":""} 09:48:58 policy-pap | [2025-06-19T09:46:38.006+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpUpdate stopping 09:48:58 policy-pap | [2025-06-19T09:46:38.006+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpUpdate stopping enqueue 09:48:58 policy-pap | [2025-06-19T09:46:38.006+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpUpdate stopping timer 09:48:58 policy-pap | [2025-06-19T09:46:38.006+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=33cf87c0-6f01-4d54-8ddf-4af07f9a8cdd, expireMs=1750326427946] 09:48:58 policy-pap | [2025-06-19T09:46:38.006+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpUpdate stopping listener 09:48:58 policy-pap | 
09:48:58 policy-pap | [2025-06-19T09:46:38.006+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpUpdate stopped
09:48:58 policy-pap | [2025-06-19T09:46:38.008+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
09:48:58 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"33cf87c0-6f01-4d54-8ddf-4af07f9a8cdd","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"zoneB\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"zoneB","version":"1.0.6"}],"name":"opa-56bc6029-e683-4320-a6d7-f0316897aa5b","requestId":"3a4e2f6a-7154-441c-b821-2775e96189c7","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750326397992","deploymentInstanceInfo":""}
09:48:58 policy-pap | [2025-06-19T09:46:38.009+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 33cf87c0-6f01-4d54-8ddf-4af07f9a8cdd
09:48:58 policy-pap | [2025-06-19T09:46:38.022+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpUpdate successful
09:48:58 policy-pap | [2025-06-19T09:46:38.023+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-56bc6029-e683-4320-a6d7-f0316897aa5b has no more requests
09:48:58 policy-pap | [2025-06-19T09:46:38.023+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification]
09:48:58 policy-pap | {"deployed-policies":[{"policy-type":"onap.policies.native.opa","policy-type-version":"1.0.0","policy-id":"zoneB","policy-version":"1.0.6","success-count":1,"failure-count":0,"incomplete-count":0}],"undeployed-policies":[]}
09:48:58 policy-pap | [2025-06-19T09:47:02.529+00:00|INFO|SessionData|http-nio-6969-exec-9] cache group opaGroup
09:48:58 policy-pap | [2025-06-19T09:47:02.531+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-9] remove policy zoneB 1.0.6 from subgroup opaGroup opa count=1
09:48:58 policy-pap | [2025-06-19T09:47:02.531+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-9] Registering an undeploy for policy zoneB 1.0.6
09:48:58 policy-pap | [2025-06-19T09:47:02.531+00:00|INFO|SessionData|http-nio-6969-exec-9] add update opa-56bc6029-e683-4320-a6d7-f0316897aa5b opaGroup opa policies=0
09:48:58 policy-pap | [2025-06-19T09:47:02.532+00:00|INFO|SessionData|http-nio-6969-exec-9] update cached group opaGroup
09:48:58 policy-pap | [2025-06-19T09:47:02.532+00:00|INFO|SessionData|http-nio-6969-exec-9] updating DB group opaGroup
09:48:58 policy-pap | [2025-06-19T09:47:02.546+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-9] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=opaGroup, pdpType=opa, policy=zoneB 1.0.6, action=UNDEPLOYMENT, timestamp=2025-06-19T09:47:02Z, user=policyadmin)]
09:48:58 policy-pap | [2025-06-19T09:47:02.564+00:00|INFO|ServiceManager|http-nio-6969-exec-9] opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpUpdate starting
09:48:58 policy-pap | [2025-06-19T09:47:02.564+00:00|INFO|ServiceManager|http-nio-6969-exec-9] opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpUpdate starting listener
09:48:58 policy-pap | [2025-06-19T09:47:02.564+00:00|INFO|ServiceManager|http-nio-6969-exec-9] opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpUpdate starting timer
09:48:58 policy-pap | [2025-06-19T09:47:02.564+00:00|INFO|TimerManager|http-nio-6969-exec-9] update timer registered Timer [name=763b7dfc-1e2e-45d1-a073-925b1118661f, expireMs=1750326452564]
09:48:58 policy-pap | [2025-06-19T09:47:02.564+00:00|INFO|ServiceManager|http-nio-6969-exec-9] opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpUpdate starting enqueue
09:48:58 policy-pap | [2025-06-19T09:47:02.565+00:00|INFO|ServiceManager|http-nio-6969-exec-9] opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpUpdate started
09:48:58 policy-pap | [2025-06-19T09:47:02.565+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
09:48:58 policy-pap | {"source":"pap-1593551e-cd74-40f6-b32a-093109ad43dc","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"zoneB","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"763b7dfc-1e2e-45d1-a073-925b1118661f","timestampMs":1750326422531,"name":"opa-56bc6029-e683-4320-a6d7-f0316897aa5b","pdpGroup":"opaGroup","pdpSubgroup":"opa"}
09:48:58 policy-pap | [2025-06-19T09:47:02.575+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
09:48:58 policy-pap | {"source":"pap-1593551e-cd74-40f6-b32a-093109ad43dc","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"zoneB","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"763b7dfc-1e2e-45d1-a073-925b1118661f","timestampMs":1750326422531,"name":"opa-56bc6029-e683-4320-a6d7-f0316897aa5b","pdpGroup":"opaGroup","pdpSubgroup":"opa"}
09:48:58 policy-pap | [2025-06-19T09:47:02.575+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE
09:48:58 policy-pap | [2025-06-19T09:47:02.576+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
09:48:58 policy-pap | {"source":"pap-1593551e-cd74-40f6-b32a-093109ad43dc","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"zoneB","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"763b7dfc-1e2e-45d1-a073-925b1118661f","timestampMs":1750326422531,"name":"opa-56bc6029-e683-4320-a6d7-f0316897aa5b","pdpGroup":"opaGroup","pdpSubgroup":"opa"}
09:48:58 policy-pap | [2025-06-19T09:47:02.577+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE
09:48:58 policy-pap | [2025-06-19T09:47:02.591+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
09:48:58 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"763b7dfc-1e2e-45d1-a073-925b1118661f","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"zoneB\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-56bc6029-e683-4320-a6d7-f0316897aa5b","requestId":"418427c8-d212-4917-b5d5-b7a57fe75342","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750326422576","deploymentInstanceInfo":""}
09:48:58 policy-pap | [2025-06-19T09:47:02.591+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
09:48:58 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"763b7dfc-1e2e-45d1-a073-925b1118661f","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"zoneB\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-56bc6029-e683-4320-a6d7-f0316897aa5b","requestId":"418427c8-d212-4917-b5d5-b7a57fe75342","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750326422576","deploymentInstanceInfo":""}
09:48:58 policy-pap | [2025-06-19T09:47:02.592+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpUpdate stopping
09:48:58 policy-pap | [2025-06-19T09:47:02.592+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpUpdate stopping enqueue
09:48:58 policy-pap | [2025-06-19T09:47:02.592+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpUpdate stopping timer
09:48:58 policy-pap | [2025-06-19T09:47:02.592+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 763b7dfc-1e2e-45d1-a073-925b1118661f
09:48:58 policy-pap | [2025-06-19T09:47:02.592+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=763b7dfc-1e2e-45d1-a073-925b1118661f, expireMs=1750326452564]
09:48:58 policy-pap | [2025-06-19T09:47:02.592+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpUpdate stopping listener
09:48:58 policy-pap | [2025-06-19T09:47:02.592+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpUpdate stopped
09:48:58 policy-pap | [2025-06-19T09:47:02.608+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpUpdate successful
09:48:58 policy-pap | [2025-06-19T09:47:02.608+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-56bc6029-e683-4320-a6d7-f0316897aa5b has no more requests
09:48:58 policy-pap | [2025-06-19T09:47:02.608+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification]
09:48:58 policy-pap | {"deployed-policies":[],"undeployed-policies":[{"policy-type":"onap.policies.native.opa","policy-type-version":"1.0.0","policy-id":"zoneB","policy-version":"1.0.6","success-count":1,"failure-count":0,"incomplete-count":0}]}
09:48:58 policy-pap | [2025-06-19T09:47:03.048+00:00|INFO|SessionData|http-nio-6969-exec-10] cache group opaGroup
09:48:58 policy-pap | [2025-06-19T09:47:03.050+00:00|WARN|PdpGroupDeleteProvider|http-nio-6969-exec-10] failed to undeploy policy: zoneB null
09:48:58 policy-pap | [2025-06-19T09:47:03.051+00:00|WARN|PdpGroupDeleteControllerV1|http-nio-6969-exec-10] undeploy policy failed
09:48:58 policy-pap | org.onap.policy.models.base.PfModelException: policy does not appear in any PDP group: zoneB null
09:48:58 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteProvider.undeployPolicy(PdpGroupDeleteProvider.java:108)
09:48:58 policy-pap | at org.onap.policy.pap.main.rest.ProviderBase.process(ProviderBase.java:161)
09:48:58 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteProvider.undeploy(PdpGroupDeleteProvider.java:92)
09:48:58 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
09:48:58 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
09:48:58 policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
09:48:58 policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569)
09:48:58 policy-pap | at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:359)
09:48:58 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:196)
09:48:58 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
09:48:58 policy-pap | at org.springframework.aop.aspectj.AspectJAfterThrowingAdvice.invoke(AspectJAfterThrowingAdvice.java:64)
09:48:58 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184)
09:48:58 policy-pap | at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:97)
09:48:58 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184)
09:48:58 policy-pap | at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:728)
09:48:58 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteProvider$$SpringCGLIB$$0.undeploy()
09:48:58 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.lambda$deletePolicy$1(PdpGroupDeleteControllerV1.java:107)
09:48:58 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.doUndeployOperation(PdpGroupDeleteControllerV1.java:160)
09:48:58 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.deletePolicy(PdpGroupDeleteControllerV1.java:106)
09:48:58 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
09:48:58 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
09:48:58 policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
09:48:58 policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569)
09:48:58 policy-pap | at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:359)
09:48:58 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:196)
09:48:58 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
09:48:58 policy-pap | at org.springframework.aop.aspectj.AspectJAfterThrowingAdvice.invoke(AspectJAfterThrowingAdvice.java:64)
09:48:58 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184)
09:48:58 policy-pap | at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:97)
09:48:58 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184)
09:48:58 policy-pap | at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:728)
09:48:58 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1$$SpringCGLIB$$0.deletePolicy()
09:48:58 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
09:48:58 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
09:48:58 policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
09:48:58 policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569)
09:48:58 policy-pap | at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:258)
09:48:58 policy-pap | at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:191)
09:48:58 policy-pap | at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:118)
09:48:58 policy-pap | at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:986)
09:48:58 policy-pap | at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:891)
09:48:58 policy-pap | at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87)
09:48:58 policy-pap | at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1089)
09:48:58 policy-pap | at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:979)
09:48:58 policy-pap | at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1014)
09:48:58 policy-pap | at org.springframework.web.servlet.FrameworkServlet.doDelete(FrameworkServlet.java:936)
09:48:58 policy-pap | at jakarta.servlet.http.HttpServlet.service(HttpServlet.java:659)
09:48:58 policy-pap | at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:885)
09:48:58 policy-pap | at jakarta.servlet.http.HttpServlet.service(HttpServlet.java:723)
09:48:58 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:195)
09:48:58 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140)
09:48:58 policy-pap | at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:51)
09:48:58 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164)
09:48:58 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140)
09:48:58 policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:108)
09:48:58 policy-pap | at org.springframework.security.web.FilterChainProxy.lambda$doFilterInternal$3(FilterChainProxy.java:231)
09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$FilterObservation$SimpleFilterObservation.lambda$wrap$1(ObservationFilterChainDecorator.java:479)
09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$AroundFilterObservation$SimpleAroundFilterObservation.lambda$wrap$1(ObservationFilterChainDecorator.java:340)
09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator.lambda$wrapSecured$0(ObservationFilterChainDecorator.java:82)
09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:128)
09:48:58 policy-pap | at org.springframework.security.web.access.intercept.AuthorizationFilter.doFilter(AuthorizationFilter.java:101)
09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240)
09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227)
09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137)
09:48:58 policy-pap | at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:126)
09:48:58 policy-pap | at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:120)
09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240)
09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227)
09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137)
09:48:58 policy-pap | at org.springframework.security.web.authentication.AnonymousAuthenticationFilter.doFilter(AnonymousAuthenticationFilter.java:100)
09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240)
09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227)
09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137)
09:48:58 policy-pap | at org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter.doFilter(SecurityContextHolderAwareRequestFilter.java:179)
09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240)
09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227)
09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137)
09:48:58 policy-pap | at org.springframework.security.web.savedrequest.RequestCacheAwareFilter.doFilter(RequestCacheAwareFilter.java:63)
09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240)
09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227)
09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137)
09:48:58 policy-pap | at org.springframework.security.web.authentication.www.BasicAuthenticationFilter.doFilterInternal(BasicAuthenticationFilter.java:213)
09:48:58 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116)
09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240)
09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227)
09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137)
09:48:58 policy-pap | at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:107)
09:48:58 policy-pap | at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:93)
09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240)
09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227)
09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137)
09:48:58 policy-pap | at org.springframework.security.web.header.HeaderWriterFilter.doHeadersAfter(HeaderWriterFilter.java:90)
09:48:58 policy-pap | at org.springframework.security.web.header.HeaderWriterFilter.doFilterInternal(HeaderWriterFilter.java:75)
09:48:58 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116)
09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240)
09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227)
09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137)
09:48:58 policy-pap | at org.springframework.security.web.context.SecurityContextHolderFilter.doFilter(SecurityContextHolderFilter.java:82)
09:48:58 policy-pap | at org.springframework.security.web.context.SecurityContextHolderFilter.doFilter(SecurityContextHolderFilter.java:69)
09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240)
09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227)
09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137)
09:48:58 policy-pap | at org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter.doFilterInternal(WebAsyncManagerIntegrationFilter.java:62)
09:48:58 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116)
09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240)
09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227)
09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137)
09:48:58 policy-pap | at org.springframework.security.web.session.DisableEncodeUrlFilter.doFilterInternal(DisableEncodeUrlFilter.java:42)
09:48:58 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116)
09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240)
09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$AroundFilterObservation$SimpleAroundFilterObservation.lambda$wrap$0(ObservationFilterChainDecorator.java:323)
09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:224)
09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137)
09:48:58 policy-pap | at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:233)
09:48:58 policy-pap | at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:191)
09:48:58 policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:113)
09:48:58 policy-pap | at org.springframework.web.servlet.handler.HandlerMappingIntrospector.lambda$createCacheFilter$3(HandlerMappingIntrospector.java:243)
09:48:58 policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:113)
09:48:58 policy-pap | at org.springframework.web.filter.CompositeFilter.doFilter(CompositeFilter.java:74)
09:48:58 policy-pap | at org.springframework.security.config.annotation.web.configuration.WebMvcSecurityConfiguration$CompositeFilterChainProxy.doFilter(WebMvcSecurityConfiguration.java:238)
09:48:58 policy-pap | at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:362)
09:48:58 policy-pap | at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:278)
09:48:58 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164)
09:48:58 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140)
09:48:58 policy-pap | at org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:100)
09:48:58 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116)
09:48:58 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164)
09:48:58 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140)
09:48:58 policy-pap | at org.springframework.web.filter.FormContentFilter.doFilterInternal(FormContentFilter.java:93)
09:48:58 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116)
09:48:58 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164)
09:48:58 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140)
09:48:58 policy-pap | at org.springframework.web.filter.ServerHttpObservationFilter.doFilterInternal(ServerHttpObservationFilter.java:114)
09:48:58 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116)
09:48:58 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164)
09:48:58 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140)
09:48:58 policy-pap | at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:201)
09:48:58 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116)
09:48:58 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164)
09:48:58 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140)
09:48:58 policy-pap | at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:167)
09:48:58 policy-pap | at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:90)
09:48:58 policy-pap | at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:483)
09:48:58 policy-pap | at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:116)
09:48:58 policy-pap | at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:93)
09:48:58 policy-pap | at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:74)
09:48:58 policy-pap | at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:344)
09:48:58 policy-pap | at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:398)
09:48:58 policy-pap | at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:63)
09:48:58 policy-pap | at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:903)
09:48:58 policy-pap | at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1740)
09:48:58 policy-pap | at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:52)
09:48:58 policy-pap | at org.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1189)
09:48:58 policy-pap | at org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:658)
09:48:58 policy-pap | at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:63)
09:48:58 policy-pap | at java.base/java.lang.Thread.run(Thread.java:840)
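[editor's note -- annotation, not log output: the PfModelException above appears to be the expected result of issuing a second DELETE for zoneB 1.0.6. The policy was already undeployed at 09:47:02 (see the undeployed-policies notification earlier), so by 09:47:03 it no longer appears in any PDP group and PAP rejects the request with a WARN; the run then continues normally with the vehicle 1.0.6 deployment below.]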
opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpUpdate starting timer 09:48:58 policy-pap | [2025-06-19T09:47:03.869+00:00|INFO|TimerManager|http-nio-6969-exec-1] update timer registered Timer [name=d80aabc3-a6fb-46f5-82f3-7a82d47b9a14, expireMs=1750326453869] 09:48:58 policy-pap | [2025-06-19T09:47:03.869+00:00|INFO|ServiceManager|http-nio-6969-exec-1] opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpUpdate starting enqueue 09:48:58 policy-pap | [2025-06-19T09:47:03.869+00:00|INFO|ServiceManager|http-nio-6969-exec-1] opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpUpdate started 09:48:58 policy-pap | [2025-06-19T09:47:03.870+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 09:48:58 policy-pap | {"source":"pap-1593551e-cd74-40f6-b32a-093109ad43dc","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.vehicle":"ewogICJ2ZWhpY2xlcyI6IFsKICAgIHsgInZlaGljbGVfaWQiOiAidjEiLCAib3duZXIiOiAidXNlcjEiLCAidHlwZSI6ICJjYXIiLCAic3RhdHVzIjogImF2YWlsYWJsZSIgfSwKICAgIHsgInZlaGljbGVfaWQiOiAidjIiLCAib3duZXIiOiAidXNlcjIiLCAidHlwZSI6ICJiaWtlIiwgInN0YXR1cyI6ICJpbiB1c2UiIH0KICBdCn0K"},"policy":{"vehicle":"cGFja2FnZSB2ZWhpY2xlCgppbXBvcnQgIHJlZ28udjEKCmRlZmF1bHQgYWxsb3cgOj0gZmFsc2UKCmFsbG93IGlmIHsKICAgIHVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzCiAgICBhY3Rpb25faXNfZ3JhbnRlZAp9CgphY3Rpb25faXNfZ3JhbnRlZCBpZiB7CiAgICAidXNlIiBpbiBpbnB1dC5hY3Rpb25zCn0KCnVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzIGNvbnRhaW5zIHZlaGljbGVfZGF0YSBpZiB7CiAgICBzb21lIHZlaGljbGUgaW4gZGF0YS5ub2RlLnZlaGljbGUudmVoaWNsZXMKICAgIHZlaGljbGUudmVoaWNsZV9pZCA9PSBpbnB1dC52ZWhpY2xlX2lkCiAgICB2ZWhpY2xlLm93bmVyID09IGlucHV0LnVzZXIKICAgIHZlaGljbGVfZGF0YSA6PSB7aW5mbzogdmVoaWNsZVtpbmZvXSB8IGluZm8gaW4gaW5wdXQuYXR0cmlidXRlc30KfQo="}},"name":"vehicle","version":"1.0.6","metadata":{"policy-id":"vehicle","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"d80aabc3-a6fb-46f5-82f3-7a82d47b9a14","timestampMs":1750326423851,"name":"opa-56bc6029-e683-4320-a6d7-f0316897aa5b","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 09:48:58 policy-pap | [2025-06-19T09:47:03.880+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 09:48:58 policy-pap | {"source":"pap-1593551e-cd74-40f6-b32a-093109ad43dc","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.vehicle":"ewogICJ2ZWhpY2xlcyI6IFsKICAgIHsgInZlaGljbGVfaWQiOiAidjEiLCAib3duZXIiOiAidXNlcjEiLCAidHlwZSI6ICJjYXIiLCAic3RhdHVzIjogImF2YWlsYWJsZSIgfSwKICAgIHsgInZlaGljbGVfaWQiOiAidjIiLCAib3duZXIiOiAidXNlcjIiLCAidHlwZSI6ICJiaWtlIiwgInN0YXR1cyI6ICJpbiB1c2UiIH0KICBdCn0K"},"policy":{"vehicle":"cGFja2FnZSB2ZWhpY2xlCgppbXBvcnQgIHJlZ28udjEKCmRlZmF1bHQgYWxsb3cgOj0gZmFsc2UKCmFsbG93IGlmIHsKICAgIHVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzCiAgICBhY3Rpb25faXNfZ3JhbnRlZAp9CgphY3Rpb25faXNfZ3JhbnRlZCBpZiB7CiAgICAidXNlIiBpbiBpbnB1dC5hY3Rpb25zCn0KCnVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzIGNvbnRhaW5zIHZlaGljbGVfZGF0YSBpZiB7CiAgICBzb21lIHZlaGljbGUgaW4gZGF0YS5ub2RlLnZlaGljbGUudmVoaWNsZXMKICAgIHZlaGljbGUudmVoaWNsZV9pZCA9PSBpbnB1dC52ZWhpY2xlX2lkCiAgICB2ZWhpY2xlLm93bmVyID09IGlucHV0LnVzZXIKICAgIHZlaGljbGVfZGF0YSA6PSB7aW5mbzogdmVoaWNsZVtpbmZvXSB8IGluZm8gaW4gaW5wdXQuYXR0cmlidXRlc30KfQo="}},"name":"vehicle","version":"1.0.6","metadata":{"policy-id":"vehicle","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"d80aabc3-a6fb-46f5-82f3-7a82d47b9a14","timestampMs":1750326423851,"name":"opa-56bc6029-e683-4320-a6d7-f0316897aa5b","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 09:48:58 policy-pap | 
[2025-06-19T09:47:03.880+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 09:48:58 policy-pap | [2025-06-19T09:47:03.882+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 09:48:58 policy-pap | {"source":"pap-1593551e-cd74-40f6-b32a-093109ad43dc","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.vehicle":"ewogICJ2ZWhpY2xlcyI6IFsKICAgIHsgInZlaGljbGVfaWQiOiAidjEiLCAib3duZXIiOiAidXNlcjEiLCAidHlwZSI6ICJjYXIiLCAic3RhdHVzIjogImF2YWlsYWJsZSIgfSwKICAgIHsgInZlaGljbGVfaWQiOiAidjIiLCAib3duZXIiOiAidXNlcjIiLCAidHlwZSI6ICJiaWtlIiwgInN0YXR1cyI6ICJpbiB1c2UiIH0KICBdCn0K"},"policy":{"vehicle":"cGFja2FnZSB2ZWhpY2xlCgppbXBvcnQgIHJlZ28udjEKCmRlZmF1bHQgYWxsb3cgOj0gZmFsc2UKCmFsbG93IGlmIHsKICAgIHVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzCiAgICBhY3Rpb25faXNfZ3JhbnRlZAp9CgphY3Rpb25faXNfZ3JhbnRlZCBpZiB7CiAgICAidXNlIiBpbiBpbnB1dC5hY3Rpb25zCn0KCnVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzIGNvbnRhaW5zIHZlaGljbGVfZGF0YSBpZiB7CiAgICBzb21lIHZlaGljbGUgaW4gZGF0YS5ub2RlLnZlaGljbGUudmVoaWNsZXMKICAgIHZlaGljbGUudmVoaWNsZV9pZCA9PSBpbnB1dC52ZWhpY2xlX2lkCiAgICB2ZWhpY2xlLm93bmVyID09IGlucHV0LnVzZXIKICAgIHZlaGljbGVfZGF0YSA6PSB7aW5mbzogdmVoaWNsZVtpbmZvXSB8IGluZm8gaW4gaW5wdXQuYXR0cmlidXRlc30KfQo="}},"name":"vehicle","version":"1.0.6","metadata":{"policy-id":"vehicle","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"d80aabc3-a6fb-46f5-82f3-7a82d47b9a14","timestampMs":1750326423851,"name":"opa-56bc6029-e683-4320-a6d7-f0316897aa5b","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 09:48:58 policy-pap | [2025-06-19T09:47:03.882+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 09:48:58 policy-pap | [2025-06-19T09:47:03.920+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 09:48:58 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"d80aabc3-a6fb-46f5-82f3-7a82d47b9a14","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"vehicle\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"vehicle","version":"1.0.6"}],"name":"opa-56bc6029-e683-4320-a6d7-f0316897aa5b","requestId":"0fc73e43-24dc-4686-8cd5-4f45f3919885","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750326423907","deploymentInstanceInfo":""} 09:48:58 policy-pap | [2025-06-19T09:47:03.921+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 09:48:58 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"d80aabc3-a6fb-46f5-82f3-7a82d47b9a14","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"vehicle\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"vehicle","version":"1.0.6"}],"name":"opa-56bc6029-e683-4320-a6d7-f0316897aa5b","requestId":"0fc73e43-24dc-4686-8cd5-4f45f3919885","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750326423907","deploymentInstanceInfo":""} 09:48:58 policy-pap | [2025-06-19T09:47:03.923+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id d80aabc3-a6fb-46f5-82f3-7a82d47b9a14 09:48:58 policy-pap | 
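For reference, the base64 blobs carried by the vehicle PDP_UPDATE payload above decode as follows. The data field node.vehicle is this JSON document:

{
  "vehicles": [
    { "vehicle_id": "v1", "owner": "user1", "type": "car", "status": "available" },
    { "vehicle_id": "v2", "owner": "user2", "type": "bike", "status": "in use" }
  ]
}

and the policy field vehicle is this Rego module (decoded verbatim, including the double space after import):

package vehicle

import  rego.v1

default allow := false

allow if {
    user_has_vehicle_access
    action_is_granted
}

action_is_granted if {
    "use" in input.actions
}

user_has_vehicle_access contains vehicle_data if {
    some vehicle in data.node.vehicle.vehicles
    vehicle.vehicle_id == input.vehicle_id
    vehicle.owner == input.user
    vehicle_data := {info: vehicle[info] | info in input.attributes}
}

The decoding can be reproduced with a minimal Python sketch; the helper name and field traversal below are illustrative only (they are not PAP or OPA-PDP code), but the payload structure matches the JSON logged above:

import base64
import json

def decode_opa_blobs(pdp_update_line: str) -> dict:
    """Decode the base64 'data' and 'policy' blobs of a logged PDP_UPDATE payload."""
    # Slice from the first brace so a raw log line with its prefix can be passed in.
    msg = json.loads(pdp_update_line[pdp_update_line.find("{"):])
    decoded = {}
    for entry in msg.get("policiesToBeDeployed", []):
        props = entry.get("properties", {})
        for section in ("data", "policy"):
            for key, blob in props.get(section, {}).items():
                decoded[section + "/" + key] = base64.b64decode(blob).decode("utf-8")
    return decoded

# Usage: pass any log line above carrying a {"source":"pap-..."} PDP_UPDATE payload.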
[2025-06-19T09:47:03.923+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpUpdate stopping
09:48:58 policy-pap | [2025-06-19T09:47:03.923+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpUpdate stopping enqueue
09:48:58 policy-pap | [2025-06-19T09:47:03.923+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpUpdate stopping timer
09:48:58 policy-pap | [2025-06-19T09:47:03.923+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=d80aabc3-a6fb-46f5-82f3-7a82d47b9a14, expireMs=1750326453869]
09:48:58 policy-pap | [2025-06-19T09:47:03.923+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpUpdate stopping listener
09:48:58 policy-pap | [2025-06-19T09:47:03.923+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpUpdate stopped
09:48:58 policy-pap | [2025-06-19T09:47:03.936+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpUpdate successful
09:48:58 policy-pap | [2025-06-19T09:47:03.936+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-56bc6029-e683-4320-a6d7-f0316897aa5b has no more requests
09:48:58 policy-pap | [2025-06-19T09:47:03.936+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification]
09:48:58 policy-pap | {"deployed-policies":[{"policy-type":"onap.policies.native.opa","policy-type-version":"1.0.0","policy-id":"vehicle","policy-version":"1.0.6","success-count":1,"failure-count":0,"incomplete-count":0}],"undeployed-policies":[]}
09:48:58 policy-pap | [2025-06-19T09:47:07.946+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=33cf87c0-6f01-4d54-8ddf-4af07f9a8cdd, expireMs=1750326427946]
09:48:58 policy-pap | [2025-06-19T09:47:23.860+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
09:48:58 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp heartbeat","response":null,"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"vehicle","version":"1.0.6"}],"name":"opa-56bc6029-e683-4320-a6d7-f0316897aa5b","requestId":"8a83170a-7682-4fb2-8f36-47c4206d4590","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750326443845","deploymentInstanceInfo":""}
09:48:58 policy-pap | [2025-06-19T09:47:23.860+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
09:48:58 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp heartbeat","response":null,"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"vehicle","version":"1.0.6"}],"name":"opa-56bc6029-e683-4320-a6d7-f0316897aa5b","requestId":"8a83170a-7682-4fb2-8f36-47c4206d4590","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750326443845","deploymentInstanceInfo":""}
09:48:58 policy-pap | [2025-06-19T09:47:23.861+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus
09:48:58 policy-pap | [2025-06-19T09:47:27.430+00:00|INFO|PdpModifyRequestMap|pool-3-thread-1] check for PDP records older than 360000ms
09:48:58 policy-pap | [2025-06-19T09:47:28.325+00:00|INFO|SessionData|http-nio-6969-exec-2] cache group opaGroup
09:48:58 policy-pap | [2025-06-19T09:47:28.325+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-2] remove policy vehicle 1.0.6 from subgroup opaGroup opa count=1
09:48:58 policy-pap | [2025-06-19T09:47:28.326+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-2] Registering an undeploy for policy vehicle 1.0.6
09:48:58 policy-pap | [2025-06-19T09:47:28.326+00:00|INFO|SessionData|http-nio-6969-exec-2] add update opa-56bc6029-e683-4320-a6d7-f0316897aa5b opaGroup opa policies=0
09:48:58 policy-pap | [2025-06-19T09:47:28.326+00:00|INFO|SessionData|http-nio-6969-exec-2] update cached group opaGroup
09:48:58 policy-pap | [2025-06-19T09:47:28.326+00:00|INFO|SessionData|http-nio-6969-exec-2] updating DB group opaGroup
09:48:58 policy-pap | [2025-06-19T09:47:28.333+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-2] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=opaGroup, pdpType=opa, policy=vehicle 1.0.6, action=UNDEPLOYMENT, timestamp=2025-06-19T09:47:28Z, user=policyadmin)]
09:48:58 policy-pap | [2025-06-19T09:47:28.342+00:00|INFO|ServiceManager|http-nio-6969-exec-2] opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpUpdate starting
09:48:58 policy-pap | [2025-06-19T09:47:28.342+00:00|INFO|ServiceManager|http-nio-6969-exec-2] opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpUpdate starting listener
09:48:58 policy-pap | [2025-06-19T09:47:28.342+00:00|INFO|ServiceManager|http-nio-6969-exec-2] opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpUpdate starting timer
09:48:58 policy-pap | [2025-06-19T09:47:28.342+00:00|INFO|TimerManager|http-nio-6969-exec-2] update timer registered Timer [name=01008446-0225-4952-be3e-34088d5cf19c, expireMs=1750326478342]
09:48:58 policy-pap | [2025-06-19T09:47:28.342+00:00|INFO|ServiceManager|http-nio-6969-exec-2] opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpUpdate starting enqueue
09:48:58 policy-pap | [2025-06-19T09:47:28.342+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
09:48:58 policy-pap | {"source":"pap-1593551e-cd74-40f6-b32a-093109ad43dc","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"vehicle","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"01008446-0225-4952-be3e-34088d5cf19c","timestampMs":1750326448326,"name":"opa-56bc6029-e683-4320-a6d7-f0316897aa5b","pdpGroup":"opaGroup","pdpSubgroup":"opa"}
09:48:58 policy-pap | [2025-06-19T09:47:28.342+00:00|INFO|TimerManager|Thread-9] update timer waiting 30000ms Timer [name=01008446-0225-4952-be3e-34088d5cf19c, expireMs=1750326478342]
09:48:58 policy-pap | [2025-06-19T09:47:28.342+00:00|INFO|ServiceManager|http-nio-6969-exec-2] opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpUpdate started
09:48:58 policy-pap | [2025-06-19T09:47:28.350+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
09:48:58 policy-pap | {"source":"pap-1593551e-cd74-40f6-b32a-093109ad43dc","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"vehicle","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"01008446-0225-4952-be3e-34088d5cf19c","timestampMs":1750326448326,"name":"opa-56bc6029-e683-4320-a6d7-f0316897aa5b","pdpGroup":"opaGroup","pdpSubgroup":"opa"}
09:48:58 policy-pap | [2025-06-19T09:47:28.351+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE
09:48:58 policy-pap | [2025-06-19T09:47:28.357+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
09:48:58 policy-pap | {"source":"pap-1593551e-cd74-40f6-b32a-093109ad43dc","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"vehicle","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"01008446-0225-4952-be3e-34088d5cf19c","timestampMs":1750326448326,"name":"opa-56bc6029-e683-4320-a6d7-f0316897aa5b","pdpGroup":"opaGroup","pdpSubgroup":"opa"}
09:48:58 policy-pap | [2025-06-19T09:47:28.357+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE
09:48:58 policy-pap | [2025-06-19T09:47:28.362+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
09:48:58 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"01008446-0225-4952-be3e-34088d5cf19c","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"vehicle\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-56bc6029-e683-4320-a6d7-f0316897aa5b","requestId":"3b25c57f-4c00-4728-a0ce-d928cf43c0ea","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750326448352","deploymentInstanceInfo":""}
09:48:58 policy-pap | [2025-06-19T09:47:28.363+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 01008446-0225-4952-be3e-34088d5cf19c
09:48:58 policy-pap | [2025-06-19T09:47:28.367+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
09:48:58 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"01008446-0225-4952-be3e-34088d5cf19c","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"vehicle\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-56bc6029-e683-4320-a6d7-f0316897aa5b","requestId":"3b25c57f-4c00-4728-a0ce-d928cf43c0ea","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750326448352","deploymentInstanceInfo":""}
09:48:58 policy-pap | [2025-06-19T09:47:28.368+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpUpdate stopping
09:48:58 policy-pap | [2025-06-19T09:47:28.368+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpUpdate stopping enqueue
09:48:58 policy-pap | [2025-06-19T09:47:28.368+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpUpdate stopping timer
09:48:58 policy-pap | [2025-06-19T09:47:28.368+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=01008446-0225-4952-be3e-34088d5cf19c, expireMs=1750326478342]
09:48:58 policy-pap | [2025-06-19T09:47:28.368+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpUpdate stopping listener
09:48:58 policy-pap | [2025-06-19T09:47:28.368+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpUpdate stopped
09:48:58 policy-pap | [2025-06-19T09:47:28.376+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpUpdate successful
09:48:58 policy-pap | [2025-06-19T09:47:28.376+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-56bc6029-e683-4320-a6d7-f0316897aa5b has no more requests
09:48:58 policy-pap | [2025-06-19T09:47:28.376+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification]
09:48:58 policy-pap | {"deployed-policies":[],"undeployed-policies":[{"policy-type":"onap.policies.native.opa","policy-type-version":"1.0.0","policy-id":"vehicle","policy-version":"1.0.6","success-count":1,"failure-count":0,"incomplete-count":0}]}
09:48:58 policy-pap | [2025-06-19T09:47:28.745+00:00|INFO|SessionData|http-nio-6969-exec-5] cache group opaGroup
09:48:58 policy-pap | [2025-06-19T09:47:28.745+00:00|WARN|PdpGroupDeleteProvider|http-nio-6969-exec-5] failed to undeploy policy: vehicle null
09:48:58 policy-pap | [2025-06-19T09:47:28.745+00:00|WARN|PdpGroupDeleteControllerV1|http-nio-6969-exec-5] undeploy policy failed
09:48:58 policy-pap | org.onap.policy.models.base.PfModelException: policy does not appear in any PDP group: vehicle null
09:48:58 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteProvider.undeployPolicy(PdpGroupDeleteProvider.java:108)
09:48:58 policy-pap | at org.onap.policy.pap.main.rest.ProviderBase.process(ProviderBase.java:161)
09:48:58 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteProvider.undeploy(PdpGroupDeleteProvider.java:92)
09:48:58 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
09:48:58 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
09:48:58 policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
09:48:58 policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569)
09:48:58 policy-pap | at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:359)
09:48:58 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:196)
09:48:58 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
09:48:58 policy-pap | at org.springframework.aop.aspectj.AspectJAfterThrowingAdvice.invoke(AspectJAfterThrowingAdvice.java:64)
09:48:58 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184)
09:48:58 policy-pap | at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:97)
09:48:58 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184)
09:48:58 policy-pap | at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:728)
09:48:58 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteProvider$$SpringCGLIB$$0.undeploy()
09:48:58 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.lambda$deletePolicy$1(PdpGroupDeleteControllerV1.java:107)
09:48:58 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.doUndeployOperation(PdpGroupDeleteControllerV1.java:160)
09:48:58 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.deletePolicy(PdpGroupDeleteControllerV1.java:106)
09:48:58 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
09:48:58 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
09:48:58 policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
09:48:58 policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569)
09:48:58 policy-pap | at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:359)
09:48:58 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:196)
09:48:58 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
09:48:58 policy-pap | at org.springframework.aop.aspectj.AspectJAfterThrowingAdvice.invoke(AspectJAfterThrowingAdvice.java:64)
09:48:58 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184)
09:48:58 policy-pap | at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:97)
09:48:58 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184)
09:48:58 policy-pap | at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:728)
09:48:58 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1$$SpringCGLIB$$0.deletePolicy()
09:48:58 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
09:48:58 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
09:48:58 policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
09:48:58 policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569)
09:48:58 policy-pap | at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:258)
09:48:58 policy-pap | at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:191)
09:48:58 policy-pap | at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:118)
09:48:58 policy-pap | at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:986)
09:48:58 policy-pap | at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:891)
09:48:58 policy-pap | at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87)
09:48:58 policy-pap | at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1089)
09:48:58 policy-pap | at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:979)
09:48:58 policy-pap | at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1014)
09:48:58 policy-pap | at org.springframework.web.servlet.FrameworkServlet.doDelete(FrameworkServlet.java:936)
09:48:58 policy-pap | at jakarta.servlet.http.HttpServlet.service(HttpServlet.java:659)
09:48:58 policy-pap | at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:885)
09:48:58 policy-pap | at jakarta.servlet.http.HttpServlet.service(HttpServlet.java:723)
09:48:58 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:195)
09:48:58 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140)
09:48:58 policy-pap | at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:51)
09:48:58 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164)
09:48:58 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140)
09:48:58 policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:108)
09:48:58 policy-pap | at org.springframework.security.web.FilterChainProxy.lambda$doFilterInternal$3(FilterChainProxy.java:231)
09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$FilterObservation$SimpleFilterObservation.lambda$wrap$1(ObservationFilterChainDecorator.java:479)
09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$AroundFilterObservation$SimpleAroundFilterObservation.lambda$wrap$1(ObservationFilterChainDecorator.java:340)
09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator.lambda$wrapSecured$0(ObservationFilterChainDecorator.java:82)
09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:128)
09:48:58 policy-pap | at org.springframework.security.web.access.intercept.AuthorizationFilter.doFilter(AuthorizationFilter.java:101)
09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240)
09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227)
09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137)
09:48:58 policy-pap | at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:126)
09:48:58 policy-pap | at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:120)
09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240)
09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227)
09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137)
09:48:58 policy-pap | at org.springframework.security.web.authentication.AnonymousAuthenticationFilter.doFilter(AnonymousAuthenticationFilter.java:100)
09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240)
09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227)
09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137)
09:48:58 policy-pap | at org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter.doFilter(SecurityContextHolderAwareRequestFilter.java:179)
09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240)
09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227)
09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137)
09:48:58 policy-pap | at org.springframework.security.web.savedrequest.RequestCacheAwareFilter.doFilter(RequestCacheAwareFilter.java:63)
09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240)
09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227)
09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137)
09:48:58 policy-pap | at org.springframework.security.web.authentication.www.BasicAuthenticationFilter.doFilterInternal(BasicAuthenticationFilter.java:213)
09:48:58 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116)
09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240)
09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227)
09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137)
09:48:58 policy-pap | at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:107)
09:48:58 policy-pap | at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:93)
09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240)
09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227)
09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137)
09:48:58 policy-pap | at org.springframework.security.web.header.HeaderWriterFilter.doHeadersAfter(HeaderWriterFilter.java:90)
09:48:58 policy-pap | at org.springframework.security.web.header.HeaderWriterFilter.doFilterInternal(HeaderWriterFilter.java:75)
09:48:58 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116)
09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240)
09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227)
09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137)
09:48:58 policy-pap | at org.springframework.security.web.context.SecurityContextHolderFilter.doFilter(SecurityContextHolderFilter.java:82)
09:48:58 policy-pap | at org.springframework.security.web.context.SecurityContextHolderFilter.doFilter(SecurityContextHolderFilter.java:69)
09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240)
09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227)
09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137)
09:48:58 policy-pap | at org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter.doFilterInternal(WebAsyncManagerIntegrationFilter.java:62)
09:48:58 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116)
09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240)
09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227)
09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137)
09:48:58 policy-pap | at org.springframework.security.web.session.DisableEncodeUrlFilter.doFilterInternal(DisableEncodeUrlFilter.java:42)
09:48:58 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116)
09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240)
09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$AroundFilterObservation$SimpleAroundFilterObservation.lambda$wrap$0(ObservationFilterChainDecorator.java:323)
09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:224)
09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137)
09:48:58 policy-pap | at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:233)
09:48:58 policy-pap | at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:191)
09:48:58 policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:113)
09:48:58 policy-pap | at org.springframework.web.servlet.handler.HandlerMappingIntrospector.lambda$createCacheFilter$3(HandlerMappingIntrospector.java:243)
09:48:58 policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:113)
09:48:58 policy-pap | at org.springframework.web.filter.CompositeFilter.doFilter(CompositeFilter.java:74)
09:48:58 policy-pap | at org.springframework.security.config.annotation.web.configuration.WebMvcSecurityConfiguration$CompositeFilterChainProxy.doFilter(WebMvcSecurityConfiguration.java:238)
09:48:58 policy-pap | at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:362)
09:48:58 policy-pap | at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:278)
09:48:58 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164)
09:48:58 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140)
09:48:58 policy-pap | at org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:100)
09:48:58 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116)
09:48:58 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164)
09:48:58 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140)
09:48:58 policy-pap | at org.springframework.web.filter.FormContentFilter.doFilterInternal(FormContentFilter.java:93)
09:48:58 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116)
09:48:58 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164)
09:48:58 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140)
09:48:58 policy-pap | at org.springframework.web.filter.ServerHttpObservationFilter.doFilterInternal(ServerHttpObservationFilter.java:114)
09:48:58 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116)
09:48:58 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164)
09:48:58 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140)
09:48:58 policy-pap | at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:201)
09:48:58 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116)
09:48:58 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164)
09:48:58 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140)
09:48:58 policy-pap | at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:167)
09:48:58 policy-pap | at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:90)
09:48:58 policy-pap | at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:483)
09:48:58 policy-pap | at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:116)
09:48:58 policy-pap | at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:93)
09:48:58 policy-pap | at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:74)
09:48:58 policy-pap | at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:344)
09:48:58 policy-pap | at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:398)
09:48:58 policy-pap | at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:63)
09:48:58 policy-pap | at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:903)
09:48:58 policy-pap | at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1740)
09:48:58 policy-pap | at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:52)
09:48:58 policy-pap | at org.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1189)
09:48:58 policy-pap | at org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:658)
09:48:58 policy-pap | at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:63)
09:48:58 policy-pap | at java.base/java.lang.Thread.run(Thread.java:840)
09:48:58 policy-pap | [2025-06-19T09:47:29.481+00:00|INFO|SessionData|http-nio-6969-exec-4] cache group opaGroup
09:48:58 policy-pap | [2025-06-19T09:47:29.481+00:00|INFO|PdpGroupDeployProvider|http-nio-6969-exec-4] add policy abac 1.0.7 to subgroup opaGroup opa count=2
09:48:58 policy-pap | [2025-06-19T09:47:29.481+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-4] Registering a deploy for policy abac 1.0.7
09:48:58 policy-pap | [2025-06-19T09:47:29.481+00:00|INFO|SessionData|http-nio-6969-exec-4] add update opa-56bc6029-e683-4320-a6d7-f0316897aa5b opaGroup opa policies=1
09:48:58 policy-pap | [2025-06-19T09:47:29.481+00:00|INFO|SessionData|http-nio-6969-exec-4] update cached group opaGroup
09:48:58 policy-pap | [2025-06-19T09:47:29.481+00:00|INFO|SessionData|http-nio-6969-exec-4] updating DB group opaGroup
09:48:58 policy-pap | [2025-06-19T09:47:29.488+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-4] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=opaGroup, pdpType=opa, policy=abac 1.0.7, action=DEPLOYMENT, timestamp=2025-06-19T09:47:29Z, user=policyadmin)]
09:48:58 policy-pap | [2025-06-19T09:47:29.496+00:00|INFO|ServiceManager|http-nio-6969-exec-4] opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpUpdate starting
09:48:58 policy-pap | [2025-06-19T09:47:29.496+00:00|INFO|ServiceManager|http-nio-6969-exec-4] opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpUpdate starting listener
09:48:58 policy-pap | [2025-06-19T09:47:29.496+00:00|INFO|ServiceManager|http-nio-6969-exec-4] opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpUpdate starting timer
09:48:58 policy-pap | [2025-06-19T09:47:29.496+00:00|INFO|TimerManager|http-nio-6969-exec-4] update timer registered Timer [name=c64624c3-c637-4201-88e3-7b3627bbd0fb, expireMs=1750326479496]
09:48:58 policy-pap | [2025-06-19T09:47:29.496+00:00|INFO|ServiceManager|http-nio-6969-exec-4] opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpUpdate starting enqueue
09:48:58 policy-pap | [2025-06-19T09:47:29.496+00:00|INFO|ServiceManager|http-nio-6969-exec-4] opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpUpdate started
09:48:58 policy-pap | [2025-06-19T09:47:29.497+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
09:48:58 policy-pap |
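Two things are worth noting in the exchange above. First, the undeploy of vehicle 1.0.6 succeeds (a PDP_STATUS with responseStatus SUCCESS, followed by a policy-notification), and the immediately repeated DELETE at 09:47:28.745 is then rejected with PfModelException ("policy does not appear in any PDP group: vehicle null", where null means no version was supplied); the WARN and stack trace are consistent with a deliberate repeat-undeploy check in this CSIT rather than a build failure. Second, every deploy and undeploy follows the same handshake: PAP publishes a PDP_UPDATE with a fresh requestId, and the OPA PDP answers with a PDP_STATUS whose response.responseTo echoes that id. A minimal Python sketch for checking that correlation when reading such logs (the helper and its one-JSON-object-per-line heuristic are illustrative assumptions, not PAP code):

import json

def correlate_pdp_messages(log_lines):
    """Pair each PDP_UPDATE request with the PDP_STATUS response that answers it."""
    updates, pairs = {}, []
    for line in log_lines:
        brace = line.find("{")
        if brace < 0:
            continue
        try:
            msg = json.loads(line[brace:])
        except json.JSONDecodeError:
            continue  # line is not a single JSON payload
        if msg.get("messageName") == "PDP_UPDATE":
            updates[msg["requestId"]] = msg
        elif msg.get("messageName") == "PDP_STATUS" and msg.get("response"):
            req_id = msg["response"].get("responseTo")
            if req_id in updates:
                pairs.append((req_id, msg["response"].get("responseStatus")))
    return pairs

# In this log, requestIds d80aabc3-..., 01008446-..., c64624c3-... and dc451126-...
# should each pair with a SUCCESS response.

The PDP_UPDATE payload for the abac 1.0.7 deployment follows.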
{"source":"pap-1593551e-cd74-40f6-b32a-093109ad43dc","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.abac":"ewogICAgInNlbnNvcl9kYXRhIjogWwogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDEiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiU3JpIExhbmthIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjI4IEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjYiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDAyIiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkNvbG9tYm8iLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMzAgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjEyMDAgbW0iLAogICAgICAgICAgICAid2luZHNwZWVkIjogIjYuMCBtL3MiLAogICAgICAgICAgICAiaHVtaWRpdHkiOiAiNDUlIiwKICAgICAgICAgICAgInBhcnRpY2xlX2RlbnNpdHkiOiAiMS41IGcvbCIsCiAgICAgICAgICAgICJ0aW1lc3RhbXAiOiAiMjAyNC0wMi0yNiIKICAgICAgICB9LAogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDMiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiS2FuZHkiLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMjUgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjgwMCBtbSIsCiAgICAgICAgICAgICJ3aW5kc3BlZWQiOiAiNC41IG0vcyIsCiAgICAgICAgICAgICJodW1pZGl0eSI6ICI2MCUiLAogICAgICAgICAgICAicGFydGljbGVfZGVuc2l0eSI6ICIxLjEgZy9sIiwKICAgICAgICAgICAgInRpbWVzdGFtcCI6ICIyMDI0LTAyLTI2IgogICAgICAgIH0sCiAgICAgICAgewogICAgICAgICAgICAiaWQiOiAiMDAwNCIsCiAgICAgICAgICAgICJsb2NhdGlvbiI6ICJHYWxsZSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI3LjIgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjMwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuOCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA1IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkphZmZuYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICItNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiMzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICIzLjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjIwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjAuOSBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA2IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIlRyaW5jb21hbGVlIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjIwIEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjU1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA3IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk51d2FyYSBFbGl5YSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNjAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI0LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjUwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA4IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkFudXJhZGhhcHVyYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyOCBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA5IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk1hdGFyYSIsC
iAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzMiBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiOTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI2LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjY1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfQogICAgXQp9"},"policy":{"abac":"cGFja2FnZSBhYmFjCgppbXBvcnQgcmVnby52MQoKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQoKYWxsb3cgaWYgewogdmlld2FibGVfc2Vuc29yX2RhdGEKIGFjdGlvbl9pc19yZWFkCn0KCmFjdGlvbl9pc19yZWFkIGlmICJyZWFkIiBpbiBpbnB1dC5hY3Rpb25zCgp2aWV3YWJsZV9zZW5zb3JfZGF0YSBjb250YWlucyB2aWV3X2RhdGEgaWYgewogc29tZSBzZW5zb3JfZGF0YSBpbiBkYXRhLm5vZGUuYWJhYy5zZW5zb3JfZGF0YQogc2Vuc29yX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KIHNlbnNvcl9kYXRhLnRpbWVzdGFtcCA8IGlucHV0LnRpbWVfcGVyaW9kLnRvCgogdmlld19kYXRhIDo9IHtkYXRhdHlwZTogc2Vuc29yX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"abac","version":"1.0.7","metadata":{"policy-id":"abac","policy-version":"1.0.7"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"c64624c3-c637-4201-88e3-7b3627bbd0fb","timestampMs":1750326449481,"name":"opa-56bc6029-e683-4320-a6d7-f0316897aa5b","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 09:48:58 policy-pap | [2025-06-19T09:47:29.507+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 09:48:58 policy-pap | {"source":"pap-1593551e-cd74-40f6-b32a-093109ad43dc","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.abac":"ewogICAgInNlbnNvcl9kYXRhIjogWwogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDEiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiU3JpIExhbmthIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjI4IEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjYiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDAyIiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkNvbG9tYm8iLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMzAgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjEyMDAgbW0iLAogICAgICAgICAgICAid2luZHNwZWVkIjogIjYuMCBtL3MiLAogICAgICAgICAgICAiaHVtaWRpdHkiOiAiNDUlIiwKICAgICAgICAgICAgInBhcnRpY2xlX2RlbnNpdHkiOiAiMS41IGcvbCIsCiAgICAgICAgICAgICJ0aW1lc3RhbXAiOiAiMjAyNC0wMi0yNiIKICAgICAgICB9LAogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDMiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiS2FuZHkiLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMjUgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjgwMCBtbSIsCiAgICAgICAgICAgICJ3aW5kc3BlZWQiOiAiNC41IG0vcyIsCiAgICAgICAgICAgICJodW1pZGl0eSI6ICI2MCUiLAogICAgICAgICAgICAicGFydGljbGVfZGVuc2l0eSI6ICIxLjEgZy9sIiwKICAgICAgICAgICAgInRpbWVzdGFtcCI6ICIyMDI0LTAyLTI2IgogICAgICAgIH0sCiAgICAgICAgewogICAgICAgICAgICAiaWQiOiAiMDAwNCIsCiAgICAgICAgICAgICJsb2NhdGlvbiI6ICJHYWxsZSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI3LjIgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjMwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuOCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA1IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkphZmZuYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICItNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiMzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICIzLjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjIwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjAuOSBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1w
IjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA2IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIlRyaW5jb21hbGVlIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjIwIEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjU1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA3IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk51d2FyYSBFbGl5YSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNjAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI0LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjUwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA4IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkFudXJhZGhhcHVyYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyOCBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA5IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk1hdGFyYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzMiBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiOTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI2LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjY1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfQogICAgXQp9"},"policy":{"abac":"cGFja2FnZSBhYmFjCgppbXBvcnQgcmVnby52MQoKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQoKYWxsb3cgaWYgewogdmlld2FibGVfc2Vuc29yX2RhdGEKIGFjdGlvbl9pc19yZWFkCn0KCmFjdGlvbl9pc19yZWFkIGlmICJyZWFkIiBpbiBpbnB1dC5hY3Rpb25zCgp2aWV3YWJsZV9zZW5zb3JfZGF0YSBjb250YWlucyB2aWV3X2RhdGEgaWYgewogc29tZSBzZW5zb3JfZGF0YSBpbiBkYXRhLm5vZGUuYWJhYy5zZW5zb3JfZGF0YQogc2Vuc29yX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KIHNlbnNvcl9kYXRhLnRpbWVzdGFtcCA8IGlucHV0LnRpbWVfcGVyaW9kLnRvCgogdmlld19kYXRhIDo9IHtkYXRhdHlwZTogc2Vuc29yX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"abac","version":"1.0.7","metadata":{"policy-id":"abac","policy-version":"1.0.7"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"c64624c3-c637-4201-88e3-7b3627bbd0fb","timestampMs":1750326449481,"name":"opa-56bc6029-e683-4320-a6d7-f0316897aa5b","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 09:48:58 policy-pap | [2025-06-19T09:47:29.508+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 09:48:58 policy-pap | 
{"source":"pap-1593551e-cd74-40f6-b32a-093109ad43dc","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.abac":"ewogICAgInNlbnNvcl9kYXRhIjogWwogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDEiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiU3JpIExhbmthIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjI4IEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjYiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDAyIiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkNvbG9tYm8iLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMzAgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjEyMDAgbW0iLAogICAgICAgICAgICAid2luZHNwZWVkIjogIjYuMCBtL3MiLAogICAgICAgICAgICAiaHVtaWRpdHkiOiAiNDUlIiwKICAgICAgICAgICAgInBhcnRpY2xlX2RlbnNpdHkiOiAiMS41IGcvbCIsCiAgICAgICAgICAgICJ0aW1lc3RhbXAiOiAiMjAyNC0wMi0yNiIKICAgICAgICB9LAogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDMiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiS2FuZHkiLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMjUgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjgwMCBtbSIsCiAgICAgICAgICAgICJ3aW5kc3BlZWQiOiAiNC41IG0vcyIsCiAgICAgICAgICAgICJodW1pZGl0eSI6ICI2MCUiLAogICAgICAgICAgICAicGFydGljbGVfZGVuc2l0eSI6ICIxLjEgZy9sIiwKICAgICAgICAgICAgInRpbWVzdGFtcCI6ICIyMDI0LTAyLTI2IgogICAgICAgIH0sCiAgICAgICAgewogICAgICAgICAgICAiaWQiOiAiMDAwNCIsCiAgICAgICAgICAgICJsb2NhdGlvbiI6ICJHYWxsZSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI3LjIgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjMwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuOCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA1IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkphZmZuYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICItNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiMzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICIzLjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjIwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjAuOSBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA2IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIlRyaW5jb21hbGVlIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjIwIEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjU1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA3IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk51d2FyYSBFbGl5YSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNjAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI0LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjUwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA4IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkFudXJhZGhhcHVyYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyOCBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA5IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk1hdGFyYSIsC
iAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzMiBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiOTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI2LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjY1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfQogICAgXQp9"},"policy":{"abac":"cGFja2FnZSBhYmFjCgppbXBvcnQgcmVnby52MQoKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQoKYWxsb3cgaWYgewogdmlld2FibGVfc2Vuc29yX2RhdGEKIGFjdGlvbl9pc19yZWFkCn0KCmFjdGlvbl9pc19yZWFkIGlmICJyZWFkIiBpbiBpbnB1dC5hY3Rpb25zCgp2aWV3YWJsZV9zZW5zb3JfZGF0YSBjb250YWlucyB2aWV3X2RhdGEgaWYgewogc29tZSBzZW5zb3JfZGF0YSBpbiBkYXRhLm5vZGUuYWJhYy5zZW5zb3JfZGF0YQogc2Vuc29yX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KIHNlbnNvcl9kYXRhLnRpbWVzdGFtcCA8IGlucHV0LnRpbWVfcGVyaW9kLnRvCgogdmlld19kYXRhIDo9IHtkYXRhdHlwZTogc2Vuc29yX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"abac","version":"1.0.7","metadata":{"policy-id":"abac","policy-version":"1.0.7"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"c64624c3-c637-4201-88e3-7b3627bbd0fb","timestampMs":1750326449481,"name":"opa-56bc6029-e683-4320-a6d7-f0316897aa5b","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 09:48:58 policy-pap | [2025-06-19T09:47:29.508+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 09:48:58 policy-pap | [2025-06-19T09:47:29.508+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 09:48:58 policy-pap | [2025-06-19T09:47:29.544+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 09:48:58 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"c64624c3-c637-4201-88e3-7b3627bbd0fb","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"abac\": \"1.0.7\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"abac","version":"1.0.7"}],"name":"opa-56bc6029-e683-4320-a6d7-f0316897aa5b","requestId":"c31760b8-8009-4602-beb8-944ab8c69ba1","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750326449531","deploymentInstanceInfo":""} 09:48:58 policy-pap | [2025-06-19T09:47:29.544+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 09:48:58 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"c64624c3-c637-4201-88e3-7b3627bbd0fb","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"abac\": \"1.0.7\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"abac","version":"1.0.7"}],"name":"opa-56bc6029-e683-4320-a6d7-f0316897aa5b","requestId":"c31760b8-8009-4602-beb8-944ab8c69ba1","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750326449531","deploymentInstanceInfo":""} 09:48:58 policy-pap | [2025-06-19T09:47:29.545+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpUpdate stopping 09:48:58 policy-pap | [2025-06-19T09:47:29.545+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpUpdate stopping enqueue 09:48:58 policy-pap | [2025-06-19T09:47:29.545+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpUpdate stopping timer 09:48:58 
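For the record, the policy blob in the abac PDP_UPDATE payload above decodes to the Rego module below (verbatim, with its single-space indentation), and the data blob node.abac decodes to a sensor_data array of nine records (ids 0001 to 0009 for Sri Lanka, Colombo, Kandy, Galle, Jaffna, Trincomalee, Nuwara Eliya, Anuradhapura and Matara, each carrying temperature, precipitation, windspeed, humidity, particle_density and a timestamp between 2024-02-26 and 2024-02-29):

package abac

import rego.v1

default allow := false

allow if {
 viewable_sensor_data
 action_is_read
}

action_is_read if "read" in input.actions

viewable_sensor_data contains view_data if {
 some sensor_data in data.node.abac.sensor_data
 sensor_data.timestamp >= input.time_period.from
 sensor_data.timestamp < input.time_period.to

 view_data := {datatype: sensor_data[datatype] | datatype in input.datatypes}
}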
policy-pap | [2025-06-19T09:47:29.545+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=c64624c3-c637-4201-88e3-7b3627bbd0fb, expireMs=1750326479496]
09:48:58 policy-pap | [2025-06-19T09:47:29.545+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpUpdate stopping listener
09:48:58 policy-pap | [2025-06-19T09:47:29.545+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpUpdate stopped
09:48:58 policy-pap | [2025-06-19T09:47:29.546+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id c64624c3-c637-4201-88e3-7b3627bbd0fb
09:48:58 policy-pap | [2025-06-19T09:47:29.554+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpUpdate successful
09:48:58 policy-pap | [2025-06-19T09:47:29.554+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-56bc6029-e683-4320-a6d7-f0316897aa5b has no more requests
09:48:58 policy-pap | [2025-06-19T09:47:29.554+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification]
09:48:58 policy-pap | {"deployed-policies":[{"policy-type":"onap.policies.native.opa","policy-type-version":"1.0.0","policy-id":"abac","policy-version":"1.0.7","success-count":1,"failure-count":0,"incomplete-count":0}],"undeployed-policies":[]}
09:48:58 policy-pap | [2025-06-19T09:47:54.170+00:00|INFO|SessionData|http-nio-6969-exec-7] cache group opaGroup
09:48:58 policy-pap | [2025-06-19T09:47:54.170+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-7] remove policy abac 1.0.7 from subgroup opaGroup opa count=1
09:48:58 policy-pap | [2025-06-19T09:47:54.170+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-7] Registering an undeploy for policy abac 1.0.7
09:48:58 policy-pap | [2025-06-19T09:47:54.170+00:00|INFO|SessionData|http-nio-6969-exec-7] add update opa-56bc6029-e683-4320-a6d7-f0316897aa5b opaGroup opa policies=0
09:48:58 policy-pap | [2025-06-19T09:47:54.170+00:00|INFO|SessionData|http-nio-6969-exec-7] update cached group opaGroup
09:48:58 policy-pap | [2025-06-19T09:47:54.170+00:00|INFO|SessionData|http-nio-6969-exec-7] updating DB group opaGroup
09:48:58 policy-pap | [2025-06-19T09:47:54.177+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-7] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=opaGroup, pdpType=opa, policy=abac 1.0.7, action=UNDEPLOYMENT, timestamp=2025-06-19T09:47:54Z, user=policyadmin)]
09:48:58 policy-pap | [2025-06-19T09:47:54.186+00:00|INFO|ServiceManager|http-nio-6969-exec-7] opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpUpdate starting
09:48:58 policy-pap | [2025-06-19T09:47:54.186+00:00|INFO|ServiceManager|http-nio-6969-exec-7] opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpUpdate starting listener
09:48:58 policy-pap | [2025-06-19T09:47:54.186+00:00|INFO|ServiceManager|http-nio-6969-exec-7] opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpUpdate starting timer
09:48:58 policy-pap | [2025-06-19T09:47:54.186+00:00|INFO|TimerManager|http-nio-6969-exec-7] update timer registered Timer [name=dc451126-590b-4085-93d5-dccf4e99bfd1, expireMs=1750326504186]
09:48:58 policy-pap | [2025-06-19T09:47:54.186+00:00|INFO|ServiceManager|http-nio-6969-exec-7] opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpUpdate starting enqueue
09:48:58 policy-pap | [2025-06-19T09:47:54.186+00:00|INFO|ServiceManager|http-nio-6969-exec-7] opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpUpdate started
09:48:58 policy-pap | [2025-06-19T09:47:54.187+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
09:48:58 policy-pap | {"source":"pap-1593551e-cd74-40f6-b32a-093109ad43dc","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"abac","version":"1.0.7"}],"messageName":"PDP_UPDATE","requestId":"dc451126-590b-4085-93d5-dccf4e99bfd1","timestampMs":1750326474170,"name":"opa-56bc6029-e683-4320-a6d7-f0316897aa5b","pdpGroup":"opaGroup","pdpSubgroup":"opa"}
09:48:58 policy-pap | [2025-06-19T09:47:54.197+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
09:48:58 policy-pap | {"source":"pap-1593551e-cd74-40f6-b32a-093109ad43dc","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"abac","version":"1.0.7"}],"messageName":"PDP_UPDATE","requestId":"dc451126-590b-4085-93d5-dccf4e99bfd1","timestampMs":1750326474170,"name":"opa-56bc6029-e683-4320-a6d7-f0316897aa5b","pdpGroup":"opaGroup","pdpSubgroup":"opa"}
09:48:58 policy-pap | [2025-06-19T09:47:54.197+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE
09:48:58 policy-pap | [2025-06-19T09:47:54.201+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
09:48:58 policy-pap | {"source":"pap-1593551e-cd74-40f6-b32a-093109ad43dc","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"abac","version":"1.0.7"}],"messageName":"PDP_UPDATE","requestId":"dc451126-590b-4085-93d5-dccf4e99bfd1","timestampMs":1750326474170,"name":"opa-56bc6029-e683-4320-a6d7-f0316897aa5b","pdpGroup":"opaGroup","pdpSubgroup":"opa"}
09:48:58 policy-pap | [2025-06-19T09:47:54.201+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE
09:48:58 policy-pap | [2025-06-19T09:47:54.209+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
09:48:58 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"dc451126-590b-4085-93d5-dccf4e99bfd1","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"abac\": \"1.0.7\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-56bc6029-e683-4320-a6d7-f0316897aa5b","requestId":"1d943df2-9a51-4de8-894c-c4518a5ee104","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750326474199","deploymentInstanceInfo":""}
09:48:58 policy-pap | [2025-06-19T09:47:54.210+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpUpdate stopping
09:48:58 policy-pap | [2025-06-19T09:47:54.210+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpUpdate stopping enqueue
09:48:58 policy-pap | [2025-06-19T09:47:54.210+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpUpdate stopping timer
09:48:58 policy-pap | [2025-06-19T09:47:54.210+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=dc451126-590b-4085-93d5-dccf4e99bfd1, expireMs=1750326504186]
09:48:58 policy-pap | [2025-06-19T09:47:54.210+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpUpdate stopping listener
09:48:58 policy-pap | [2025-06-19T09:47:54.210+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpUpdate stopped
09:48:58 policy-pap | [2025-06-19T09:47:54.217+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
09:48:58 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"dc451126-590b-4085-93d5-dccf4e99bfd1","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"abac\": \"1.0.7\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-56bc6029-e683-4320-a6d7-f0316897aa5b","requestId":"1d943df2-9a51-4de8-894c-c4518a5ee104","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750326474199","deploymentInstanceInfo":""}
09:48:58 policy-pap | [2025-06-19T09:47:54.218+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id dc451126-590b-4085-93d5-dccf4e99bfd1
09:48:58 policy-pap | [2025-06-19T09:47:54.220+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-56bc6029-e683-4320-a6d7-f0316897aa5b PdpUpdate successful
09:48:58 policy-pap | [2025-06-19T09:47:54.220+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-56bc6029-e683-4320-a6d7-f0316897aa5b has no more requests
09:48:58 policy-pap | [2025-06-19T09:47:54.220+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification]
09:48:58 policy-pap | {"deployed-policies":[],"undeployed-policies":[{"policy-type":"onap.policies.native.opa","policy-type-version":"1.0.0","policy-id":"abac","policy-version":"1.0.7","success-count":1,"failure-count":0,"incomplete-count":0}]}
09:48:58 policy-pap | [2025-06-19T09:47:54.579+00:00|INFO|SessionData|http-nio-6969-exec-8] cache group opaGroup
09:48:58 policy-pap | [2025-06-19T09:47:54.579+00:00|WARN|PdpGroupDeleteProvider|http-nio-6969-exec-8] failed to undeploy policy: abac null
09:48:58 policy-pap | [2025-06-19T09:47:54.579+00:00|WARN|PdpGroupDeleteControllerV1|http-nio-6969-exec-8] undeploy policy failed
09:48:58 policy-pap | org.onap.policy.models.base.PfModelException: policy does not appear in any PDP group: abac null
09:48:58 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteProvider.undeployPolicy(PdpGroupDeleteProvider.java:108)
09:48:58 policy-pap | at org.onap.policy.pap.main.rest.ProviderBase.process(ProviderBase.java:161)
09:48:58 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteProvider.undeploy(PdpGroupDeleteProvider.java:92)
09:48:58 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
09:48:58 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
09:48:58 policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
09:48:58 policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569)
09:48:58 policy-pap | at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:359)
09:48:58 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:196)
09:48:58 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
09:48:58 policy-pap | at org.springframework.aop.aspectj.AspectJAfterThrowingAdvice.invoke(AspectJAfterThrowingAdvice.java:64)
09:48:58 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184)
09:48:58 policy-pap | at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:97)
09:48:58 policy-pap | at
org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) 09:48:58 policy-pap | at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:728) 09:48:58 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteProvider$$SpringCGLIB$$0.undeploy() 09:48:58 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.lambda$deletePolicy$1(PdpGroupDeleteControllerV1.java:107) 09:48:58 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.doUndeployOperation(PdpGroupDeleteControllerV1.java:160) 09:48:58 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.deletePolicy(PdpGroupDeleteControllerV1.java:106) 09:48:58 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 09:48:58 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) 09:48:58 policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 09:48:58 policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569) 09:48:58 policy-pap | at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:359) 09:48:58 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:196) 09:48:58 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163) 09:48:58 policy-pap | at org.springframework.aop.aspectj.AspectJAfterThrowingAdvice.invoke(AspectJAfterThrowingAdvice.java:64) 09:48:58 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) 09:48:58 policy-pap | at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:97) 09:48:58 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) 09:48:58 policy-pap | at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:728) 09:48:58 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1$$SpringCGLIB$$0.deletePolicy() 09:48:58 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 09:48:58 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) 09:48:58 policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 09:48:58 policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569) 09:48:58 policy-pap | at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:258) 09:48:58 policy-pap | at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:191) 09:48:58 policy-pap | at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:118) 09:48:58 policy-pap | at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:986) 09:48:58 policy-pap | at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:891) 
09:48:58 policy-pap | at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87) 09:48:58 policy-pap | at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1089) 09:48:58 policy-pap | at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:979) 09:48:58 policy-pap | at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1014) 09:48:58 policy-pap | at org.springframework.web.servlet.FrameworkServlet.doDelete(FrameworkServlet.java:936) 09:48:58 policy-pap | at jakarta.servlet.http.HttpServlet.service(HttpServlet.java:659) 09:48:58 policy-pap | at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:885) 09:48:58 policy-pap | at jakarta.servlet.http.HttpServlet.service(HttpServlet.java:723) 09:48:58 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:195) 09:48:58 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 09:48:58 policy-pap | at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:51) 09:48:58 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) 09:48:58 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 09:48:58 policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:108) 09:48:58 policy-pap | at org.springframework.security.web.FilterChainProxy.lambda$doFilterInternal$3(FilterChainProxy.java:231) 09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$FilterObservation$SimpleFilterObservation.lambda$wrap$1(ObservationFilterChainDecorator.java:479) 09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$AroundFilterObservation$SimpleAroundFilterObservation.lambda$wrap$1(ObservationFilterChainDecorator.java:340) 09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator.lambda$wrapSecured$0(ObservationFilterChainDecorator.java:82) 09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:128) 09:48:58 policy-pap | at org.springframework.security.web.access.intercept.AuthorizationFilter.doFilter(AuthorizationFilter.java:101) 09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 09:48:58 policy-pap | at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:126) 09:48:58 policy-pap | at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:120) 09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 09:48:58 policy-pap | at 
org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 09:48:58 policy-pap | at org.springframework.security.web.authentication.AnonymousAuthenticationFilter.doFilter(AnonymousAuthenticationFilter.java:100) 09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 09:48:58 policy-pap | at org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter.doFilter(SecurityContextHolderAwareRequestFilter.java:179) 09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 09:48:58 policy-pap | at org.springframework.security.web.savedrequest.RequestCacheAwareFilter.doFilter(RequestCacheAwareFilter.java:63) 09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 09:48:58 policy-pap | at org.springframework.security.web.authentication.www.BasicAuthenticationFilter.doFilterInternal(BasicAuthenticationFilter.java:213) 09:48:58 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 09:48:58 policy-pap | at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:107) 09:48:58 policy-pap | at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:93) 09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 09:48:58 policy-pap | at 
org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 09:48:58 policy-pap | at org.springframework.security.web.header.HeaderWriterFilter.doHeadersAfter(HeaderWriterFilter.java:90) 09:48:58 policy-pap | at org.springframework.security.web.header.HeaderWriterFilter.doFilterInternal(HeaderWriterFilter.java:75) 09:48:58 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 09:48:58 policy-pap | at org.springframework.security.web.context.SecurityContextHolderFilter.doFilter(SecurityContextHolderFilter.java:82) 09:48:58 policy-pap | at org.springframework.security.web.context.SecurityContextHolderFilter.doFilter(SecurityContextHolderFilter.java:69) 09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 09:48:58 policy-pap | at org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter.doFilterInternal(WebAsyncManagerIntegrationFilter.java:62) 09:48:58 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 09:48:58 policy-pap | at org.springframework.security.web.session.DisableEncodeUrlFilter.doFilterInternal(DisableEncodeUrlFilter.java:42) 09:48:58 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$AroundFilterObservation$SimpleAroundFilterObservation.lambda$wrap$0(ObservationFilterChainDecorator.java:323) 09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:224) 09:48:58 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 09:48:58 policy-pap | at 
org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:233) 09:48:58 policy-pap | at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:191) 09:48:58 policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:113) 09:48:58 policy-pap | at org.springframework.web.servlet.handler.HandlerMappingIntrospector.lambda$createCacheFilter$3(HandlerMappingIntrospector.java:243) 09:48:58 policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:113) 09:48:58 policy-pap | at org.springframework.web.filter.CompositeFilter.doFilter(CompositeFilter.java:74) 09:48:58 policy-pap | at org.springframework.security.config.annotation.web.configuration.WebMvcSecurityConfiguration$CompositeFilterChainProxy.doFilter(WebMvcSecurityConfiguration.java:238) 09:48:58 policy-pap | at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:362) 09:48:58 policy-pap | at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:278) 09:48:58 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) 09:48:58 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 09:48:58 policy-pap | at org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:100) 09:48:58 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 09:48:58 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) 09:48:58 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 09:48:58 policy-pap | at org.springframework.web.filter.FormContentFilter.doFilterInternal(FormContentFilter.java:93) 09:48:58 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 09:48:58 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) 09:48:58 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 09:48:58 policy-pap | at org.springframework.web.filter.ServerHttpObservationFilter.doFilterInternal(ServerHttpObservationFilter.java:114) 09:48:58 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 09:48:58 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) 09:48:58 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 09:48:58 policy-pap | at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:201) 09:48:58 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 09:48:58 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) 09:48:58 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 09:48:58 policy-pap | at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:167) 09:48:58 policy-pap | at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:90) 09:48:58 
policy-pap | at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:483) 09:48:58 policy-pap | at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:116) 09:48:58 policy-pap | at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:93) 09:48:58 policy-pap | at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:74) 09:48:58 policy-pap | at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:344) 09:48:58 policy-pap | at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:398) 09:48:58 policy-pap | at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:63) 09:48:58 policy-pap | at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:903) 09:48:58 policy-pap | at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1740) 09:48:58 policy-pap | at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:52) 09:48:58 policy-pap | at org.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1189) 09:48:58 policy-pap | at org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:658) 09:48:58 policy-pap | at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:63) 09:48:58 policy-pap | at java.base/java.lang.Thread.run(Thread.java:840) 09:48:58 policy-pap | [2025-06-19T09:47:58.343+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=01008446-0225-4952-be3e-34088d5cf19c, expireMs=1750326478342] 09:48:58 postgres | The files belonging to this database system will be owned by user "postgres". 09:48:58 postgres | This user must also own the server process. 09:48:58 postgres | 09:48:58 postgres | The database cluster will be initialized with locale "en_US.utf8". 09:48:58 postgres | The default database encoding has accordingly been set to "UTF8". 09:48:58 postgres | The default text search configuration will be set to "english". 09:48:58 postgres | 09:48:58 postgres | Data page checksums are disabled. 09:48:58 postgres | 09:48:58 postgres | fixing permissions on existing directory /var/lib/postgresql/data ... ok 09:48:58 postgres | creating subdirectories ... ok 09:48:58 postgres | selecting dynamic shared memory implementation ... posix 09:48:58 postgres | selecting default max_connections ... 100 09:48:58 postgres | selecting default shared_buffers ... 128MB 09:48:58 postgres | selecting default time zone ... Etc/UTC 09:48:58 postgres | creating configuration files ... ok 09:48:58 postgres | running bootstrap script ... ok 09:48:58 postgres | performing post-bootstrap initialization ... ok 09:48:58 postgres | syncing data to disk ... ok 09:48:58 postgres | 09:48:58 postgres | 09:48:58 postgres | Success. You can now start the database server using: 09:48:58 postgres | 09:48:58 postgres | pg_ctl -D /var/lib/postgresql/data -l logfile start 09:48:58 postgres | 09:48:58 postgres | initdb: warning: enabling "trust" authentication for local connections 09:48:58 postgres | initdb: hint: You can change this by editing pg_hba.conf or using the option -A, or --auth-local and --auth-host, the next time you run initdb. 
09:48:58 postgres | waiting for server to start....2025-06-19 09:42:43.432 UTC [48] LOG: starting PostgreSQL 16.4 (Debian 16.4-1.pgdg120+2) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit 09:48:58 postgres | 2025-06-19 09:42:43.434 UTC [48] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432" 09:48:58 postgres | 2025-06-19 09:42:43.440 UTC [51] LOG: database system was shut down at 2025-06-19 09:42:43 UTC 09:48:58 postgres | 2025-06-19 09:42:43.447 UTC [48] LOG: database system is ready to accept connections 09:48:58 postgres | done 09:48:58 postgres | server started 09:48:58 postgres | 09:48:58 postgres | /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/db-pg.conf 09:48:58 postgres | 09:48:58 postgres | /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/db-pg.sh 09:48:58 postgres | #!/bin/bash -xv 09:48:58 postgres | # Copyright (C) 2022, 2024 Nordix Foundation. All rights reserved 09:48:58 postgres | # 09:48:58 postgres | # Licensed under the Apache License, Version 2.0 (the "License"); 09:48:58 postgres | # you may not use this file except in compliance with the License. 09:48:58 postgres | # You may obtain a copy of the License at 09:48:58 postgres | # 09:48:58 postgres | # http://www.apache.org/licenses/LICENSE-2.0 09:48:58 postgres | # 09:48:58 postgres | # Unless required by applicable law or agreed to in writing, software 09:48:58 postgres | # distributed under the License is distributed on an "AS IS" BASIS, 09:48:58 postgres | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 09:48:58 postgres | # See the License for the specific language governing permissions and 09:48:58 postgres | # limitations under the License. 09:48:58 postgres | 09:48:58 postgres | psql -U postgres -d postgres --command "CREATE USER ${PGSQL_USER} WITH PASSWORD '${PGSQL_PASSWORD}';" 09:48:58 postgres | + psql -U postgres -d postgres --command 'CREATE USER policy_user WITH PASSWORD '\''policy_user'\'';' 09:48:58 postgres | CREATE ROLE 09:48:58 postgres | 09:48:58 postgres | for db in migration pooling policyadmin policyclamp operationshistory clampacm 09:48:58 postgres | do 09:48:58 postgres | psql -U postgres -d postgres --command "CREATE DATABASE ${db};" 09:48:58 postgres | psql -U postgres -d postgres --command "ALTER DATABASE ${db} OWNER TO ${PGSQL_USER} ;" 09:48:58 postgres | psql -U postgres -d postgres --command "GRANT ALL PRIVILEGES ON DATABASE ${db} TO ${PGSQL_USER} ;" 09:48:58 postgres | done 09:48:58 postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm 09:48:58 postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE migration;' 09:48:58 postgres | CREATE DATABASE 09:48:58 postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE migration OWNER TO policy_user ;' 09:48:58 postgres | ALTER DATABASE 09:48:58 postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE migration TO policy_user ;' 09:48:58 postgres | GRANT 09:48:58 postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm 09:48:58 postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE pooling;' 09:48:58 postgres | CREATE DATABASE 09:48:58 postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE pooling OWNER TO policy_user ;' 09:48:58 postgres | ALTER DATABASE 09:48:58 postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE pooling TO policy_user ;' 
09:48:58 postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm 09:48:58 postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE policyadmin;' 09:48:58 postgres | GRANT 09:48:58 postgres | CREATE DATABASE 09:48:58 postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE policyadmin OWNER TO policy_user ;' 09:48:58 postgres | ALTER DATABASE 09:48:58 postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE policyadmin TO policy_user ;' 09:48:58 postgres | GRANT 09:48:58 postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm 09:48:58 postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE policyclamp;' 09:48:58 postgres | CREATE DATABASE 09:48:58 postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE policyclamp OWNER TO policy_user ;' 09:48:58 postgres | ALTER DATABASE 09:48:58 postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE policyclamp TO policy_user ;' 09:48:58 postgres | GRANT 09:48:58 postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm 09:48:58 postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE operationshistory;' 09:48:58 postgres | CREATE DATABASE 09:48:58 postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE operationshistory OWNER TO policy_user ;' 09:48:58 postgres | ALTER DATABASE 09:48:58 postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE operationshistory TO policy_user ;' 09:48:58 postgres | GRANT 09:48:58 postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm 09:48:58 postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE clampacm;' 09:48:58 postgres | CREATE DATABASE 09:48:58 postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE clampacm OWNER TO policy_user ;' 09:48:58 postgres | ALTER DATABASE 09:48:58 postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE clampacm TO policy_user ;' 09:48:58 postgres | GRANT 09:48:58 postgres | 09:48:58 postgres | waiting for server to shut down...2025-06-19 09:42:44.709 UTC [48] LOG: received fast shutdown request 09:48:58 postgres | .2025-06-19 09:42:44.712 UTC [48] LOG: aborting any active transactions 09:48:58 postgres | 2025-06-19 09:42:44.714 UTC [48] LOG: background worker "logical replication launcher" (PID 54) exited with exit code 1 09:48:58 postgres | 2025-06-19 09:42:44.715 UTC [49] LOG: shutting down 09:48:58 postgres | 2025-06-19 09:42:44.717 UTC [49] LOG: checkpoint starting: shutdown immediate 09:48:58 postgres | 2025-06-19 09:42:45.412 UTC [49] LOG: checkpoint complete: wrote 5511 buffers (33.6%); 0 WAL file(s) added, 0 removed, 1 recycled; write=0.416 s, sync=0.152 s, total=0.698 s; sync files=1788, longest=0.029 s, average=0.001 s; distance=25535 kB, estimate=25535 kB; lsn=0/2DDA218, redo lsn=0/2DDA218 09:48:58 postgres | 2025-06-19 09:42:45.430 UTC [48] LOG: database system is shut down 09:48:58 postgres | done 09:48:58 postgres | server stopped 09:48:58 postgres | 09:48:58 postgres | PostgreSQL init process complete; ready for start up. 
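For reference, the /docker-entrypoint-initdb.d/db-pg.sh script whose bash xtrace output is interleaved above reduces to the commands below, reconstructed from the echoed lines (in this run both ${PGSQL_USER} and ${PGSQL_PASSWORD} expand to policy_user):

    # create the shared policy database user
    psql -U postgres -d postgres --command "CREATE USER ${PGSQL_USER} WITH PASSWORD '${PGSQL_PASSWORD}';"

    # create one database per policy component and hand ownership to that user
    for db in migration pooling policyadmin policyclamp operationshistory clampacm
    do
        psql -U postgres -d postgres --command "CREATE DATABASE ${db};"
        psql -U postgres -d postgres --command "ALTER DATABASE ${db} OWNER TO ${PGSQL_USER} ;"
        psql -U postgres -d postgres --command "GRANT ALL PRIVILEGES ON DATABASE ${db} TO ${PGSQL_USER} ;"
    done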
09:48:58 postgres | 09:48:58 postgres | 2025-06-19 09:42:45.539 UTC [1] LOG: starting PostgreSQL 16.4 (Debian 16.4-1.pgdg120+2) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit 09:48:58 postgres | 2025-06-19 09:42:45.540 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432 09:48:58 postgres | 2025-06-19 09:42:45.540 UTC [1] LOG: listening on IPv6 address "::", port 5432 09:48:58 postgres | 2025-06-19 09:42:45.543 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432" 09:48:58 postgres | 2025-06-19 09:42:45.678 UTC [101] LOG: database system was shut down at 2025-06-19 09:42:45 UTC 09:48:58 postgres | 2025-06-19 09:42:45.791 UTC [1] LOG: database system is ready to accept connections 09:48:58 postgres | 2025-06-19 09:47:45.722 UTC [99] LOG: checkpoint starting: time 09:48:58 postgres | 2025-06-19 09:48:50.564 UTC [99] LOG: checkpoint complete: wrote 650 buffers (4.0%); 0 WAL file(s) added, 0 removed, 1 recycled; write=64.806 s, sync=0.025 s, total=64.842 s; sync files=515, longest=0.002 s, average=0.001 s; distance=3534 kB, estimate=3534 kB; lsn=0/3150318, redo lsn=0/314DDE0 09:48:58 prometheus | time=2025-06-19T09:42:44.560Z level=INFO source=main.go:674 msg="No time or size retention was set so using the default time retention" duration=15d 09:48:58 prometheus | time=2025-06-19T09:42:44.560Z level=INFO source=main.go:725 msg="Starting Prometheus Server" mode=server version="(version=3.4.1, branch=HEAD, revision=aea6503d9bbaad6c5faff3ecf6f1025213356c92)" 09:48:58 prometheus | time=2025-06-19T09:42:44.560Z level=INFO source=main.go:730 msg="operational information" build_context="(go=go1.24.3, platform=linux/amd64, user=root@16f976c24db1, date=20250531-10:44:38, tags=netgo,builtinassets,stringlabels)" host_details="(Linux 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 prometheus (none))" fd_limits="(soft=1048576, hard=1048576)" vm_limits="(soft=unlimited, hard=unlimited)" 09:48:58 prometheus | time=2025-06-19T09:42:44.561Z level=INFO source=main.go:806 msg="Leaving GOMAXPROCS=8: CPU quota undefined" component=automaxprocs 09:48:58 prometheus | time=2025-06-19T09:42:44.563Z level=INFO source=web.go:656 msg="Start listening for connections" component=web address=0.0.0.0:9090 09:48:58 prometheus | time=2025-06-19T09:42:44.563Z level=INFO source=main.go:1266 msg="Starting TSDB ..." 09:48:58 prometheus | time=2025-06-19T09:42:44.566Z level=INFO source=tls_config.go:347 msg="Listening on" component=web address=[::]:9090 09:48:58 prometheus | time=2025-06-19T09:42:44.566Z level=INFO source=tls_config.go:350 msg="TLS is disabled." 
component=web http2=false address=[::]:9090 09:48:58 prometheus | time=2025-06-19T09:42:44.575Z level=INFO source=head.go:657 msg="Replaying on-disk memory mappable chunks if any" component=tsdb 09:48:58 prometheus | time=2025-06-19T09:42:44.575Z level=INFO source=head.go:744 msg="On-disk memory mappable chunks replay completed" component=tsdb duration=1.52µs 09:48:58 prometheus | time=2025-06-19T09:42:44.575Z level=INFO source=head.go:752 msg="Replaying WAL, this may take a while" component=tsdb 09:48:58 prometheus | time=2025-06-19T09:42:44.576Z level=INFO source=head.go:825 msg="WAL segment loaded" component=tsdb segment=0 maxSegment=0 duration=680.575µs 09:48:58 prometheus | time=2025-06-19T09:42:44.576Z level=INFO source=head.go:862 msg="WAL replay completed" component=tsdb checkpoint_replay_duration=33.26µs wal_replay_duration=706.805µs wbl_replay_duration=290ns chunk_snapshot_load_duration=0s mmap_chunk_replay_duration=1.52µs total_replay_duration=807.138µs 09:48:58 prometheus | time=2025-06-19T09:42:44.578Z level=INFO source=main.go:1287 msg="filesystem information" fs_type=EXT4_SUPER_MAGIC 09:48:58 prometheus | time=2025-06-19T09:42:44.579Z level=INFO source=main.go:1290 msg="TSDB started" 09:48:58 prometheus | time=2025-06-19T09:42:44.579Z level=INFO source=main.go:1475 msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml 09:48:58 prometheus | time=2025-06-19T09:42:44.580Z level=INFO source=main.go:1514 msg="updated GOGC" old=100 new=75 09:48:58 prometheus | time=2025-06-19T09:42:44.580Z level=INFO source=main.go:1524 msg="Completed loading of configuration file" db_storage=1.52µs remote_storage=1.57µs web_handler=900ns query_engine=1.19µs scrape=257.465µs scrape_sd=156.954µs notify=147.213µs notify_sd=12.56µs rules=2.31µs tracing=4.48µs filename=/etc/prometheus/prometheus.yml totalDuration=1.215106ms 09:48:58 prometheus | time=2025-06-19T09:42:44.580Z level=INFO source=main.go:1251 msg="Server is ready to receive web requests." 09:48:58 prometheus | time=2025-06-19T09:42:44.580Z level=INFO source=manager.go:175 msg="Starting rule manager..." component="rule manager" 09:48:59 zookeeper | ===> User 09:48:59 zookeeper | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) 09:48:59 zookeeper | ===> Configuring ... 09:48:59 zookeeper | ===> Running preflight checks ... 09:48:59 zookeeper | ===> Check if /var/lib/zookeeper/data is writable ... 09:48:59 zookeeper | ===> Check if /var/lib/zookeeper/log is writable ... 09:48:59 zookeeper | ===> Launching ... 09:48:59 zookeeper | ===> Launching zookeeper ... 
09:48:59 zookeeper | [2025-06-19 09:42:49,798] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 09:48:59 zookeeper | [2025-06-19 09:42:49,801] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 09:48:59 zookeeper | [2025-06-19 09:42:49,801] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 09:48:59 zookeeper | [2025-06-19 09:42:49,801] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 09:48:59 zookeeper | [2025-06-19 09:42:49,801] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 09:48:59 zookeeper | [2025-06-19 09:42:49,802] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager) 09:48:59 zookeeper | [2025-06-19 09:42:49,802] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager) 09:48:59 zookeeper | [2025-06-19 09:42:49,803] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager) 09:48:59 zookeeper | [2025-06-19 09:42:49,803] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain) 09:48:59 zookeeper | [2025-06-19 09:42:49,804] INFO Log4j 1.2 jmx support not found; jmx disabled. (org.apache.zookeeper.jmx.ManagedUtil) 09:48:59 zookeeper | [2025-06-19 09:42:49,805] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 09:48:59 zookeeper | [2025-06-19 09:42:49,805] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 09:48:59 zookeeper | [2025-06-19 09:42:49,805] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 09:48:59 zookeeper | [2025-06-19 09:42:49,805] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 09:48:59 zookeeper | [2025-06-19 09:42:49,805] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 09:48:59 zookeeper | [2025-06-19 09:42:49,805] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain) 09:48:59 zookeeper | [2025-06-19 09:42:49,818] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@3bbc39f8 (org.apache.zookeeper.server.ServerMetrics) 09:48:59 zookeeper | [2025-06-19 09:42:49,820] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) 09:48:59 zookeeper | [2025-06-19 09:42:49,820] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) 09:48:59 zookeeper | [2025-06-19 09:42:49,823] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog) 09:48:59 zookeeper | [2025-06-19 09:42:49,832] INFO (org.apache.zookeeper.server.ZooKeeperServer) 09:48:59 zookeeper | [2025-06-19 09:42:49,832] INFO ______ _ (org.apache.zookeeper.server.ZooKeeperServer) 09:48:59 zookeeper | [2025-06-19 09:42:49,832] INFO |___ / | | (org.apache.zookeeper.server.ZooKeeperServer) 09:48:59 zookeeper | [2025-06-19 09:42:49,832] INFO / / ___ ___ | | __ ___ ___ _ __ ___ _ __ (org.apache.zookeeper.server.ZooKeeperServer) 09:48:59 zookeeper | [2025-06-19 09:42:49,832] INFO / 
/ / _ \ / _ \ | |/ / / _ \ / _ \ | '_ \ / _ \ | '__| (org.apache.zookeeper.server.ZooKeeperServer) 09:48:59 zookeeper | [2025-06-19 09:42:49,832] INFO / /__ | (_) | | (_) | | < | __/ | __/ | |_) | | __/ | | (org.apache.zookeeper.server.ZooKeeperServer) 09:48:59 zookeeper | [2025-06-19 09:42:49,832] INFO /_____| \___/ \___/ |_|\_\ \___| \___| | .__/ \___| |_| (org.apache.zookeeper.server.ZooKeeperServer) 09:48:59 zookeeper | [2025-06-19 09:42:49,832] INFO | | (org.apache.zookeeper.server.ZooKeeperServer) 09:48:59 zookeeper | [2025-06-19 09:42:49,832] INFO |_| (org.apache.zookeeper.server.ZooKeeperServer) 09:48:59 zookeeper | [2025-06-19 09:42:49,832] INFO (org.apache.zookeeper.server.ZooKeeperServer) 09:48:59 zookeeper | [2025-06-19 09:42:49,834] INFO Server environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.server.ZooKeeperServer) 09:48:59 zookeeper | [2025-06-19 09:42:49,834] INFO Server environment:host.name=zookeeper (org.apache.zookeeper.server.ZooKeeperServer) 09:48:59 zookeeper | [2025-06-19 09:42:49,834] INFO Server environment:java.version=17.0.14 (org.apache.zookeeper.server.ZooKeeperServer) 09:48:59 zookeeper | [2025-06-19 09:42:49,834] INFO Server environment:java.vendor=Eclipse Adoptium (org.apache.zookeeper.server.ZooKeeperServer) 09:48:59 zookeeper | [2025-06-19 09:42:49,834] INFO Server environment:java.home=/usr/lib/jvm/temurin-17-jre (org.apache.zookeeper.server.ZooKeeperServer) 09:48:59 zookeeper | [2025-06-19 09:42:49,834] INFO Server environment:java.class.path=/usr/bin/../share/java/kafka/kafka-streams-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-transaction-coordinator-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/connect-transforms-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.118.Final.jar:/usr/bin/../share/java/kafka/kafka-clients-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/connect-mirror-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.16.2.jar:/usr/bin/../share/java/kafka/scala-library-2.13.15.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.118.Final.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.16.2.jar:/usr/bin/../share/java/kafka/connect-runtime-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-afterburner-2.16.2.jar:/usr/bin/../share/java/kafka
/kafka-streams-test-utils-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/protobuf-java-3.25.5.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.16.2.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/maven-artifact-3.9.6.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/trogdor-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-server-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-shell-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.15.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-common-4.1.118.Final.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/commons-lang3-3.12.0.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.118.Final.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.6-4.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.118.Final.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.118.Final.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-tools-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.16.2.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.118.Final.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.16.2.jar:/usr/bin/../share/java/kafka/jackson-databind-2.16.2.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/opentelemetry-proto-1.0.0-alpha.jar:/usr/bin/..
/share/java/kafka/jetty-security-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/connect-json-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.16.2.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/kafka-raft-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/plexus-utils-3.5.1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/connect-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/commons-io-2.14.0.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.16.2.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/jackson-core-2.16.2.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jline-3.25.1.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.118.Final.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.118.Final.jar:/usr/bin/../share/java/kafka/kafka-storage-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.57.v20241219.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.server.ZooKeeperServer) 09:48:59 zookeeper | [2025-06-19 09:42:49,834] INFO Server environment:java.library.path=/usr/local/lib64:/usr/local/lib::/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer) 09:48:59 zookeeper | [2025-06-19 09:42:49,834] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer) 09:48:59 zookeeper | [2025-06-19 09:42:49,834] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer) 09:48:59 zookeeper | [2025-06-19 09:42:49,834] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer) 09:48:59 zookeeper | [2025-06-19 09:42:49,834] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer) 09:48:59 zookeeper | [2025-06-19 09:42:49,834] INFO Server environment:os.version=4.15.0-192-generic (org.apache.zookeeper.server.ZooKeeperServer) 09:48:59 zookeeper | [2025-06-19 09:42:49,834] INFO Server environment:user.name=appuser (org.apache.zookeeper.server.ZooKeeperServer) 09:48:59 zookeeper | [2025-06-19 09:42:49,834] INFO Server environment:user.home=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) 09:48:59 zookeeper | [2025-06-19 09:42:49,834] INFO Server environment:user.dir=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) 09:48:59 zookeeper | [2025-06-19 09:42:49,834] INFO Server environment:os.memory.free=494MB (org.apache.zookeeper.server.ZooKeeperServer) 09:48:59 zookeeper | [2025-06-19 09:42:49,835] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer) 09:48:59 zookeeper | [2025-06-19 09:42:49,835] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer) 09:48:59 zookeeper | [2025-06-19 09:42:49,835] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer) 09:48:59 zookeeper | [2025-06-19 09:42:49,835] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) 09:48:59 
09:48:59 zookeeper | [2025-06-19 09:42:49,835] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer)
09:48:59 zookeeper | [2025-06-19 09:42:49,835] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer)
09:48:59 zookeeper | [2025-06-19 09:42:49,835] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer)
09:48:59 zookeeper | [2025-06-19 09:42:49,835] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer)
09:48:59 zookeeper | [2025-06-19 09:42:49,835] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer)
09:48:59 zookeeper | [2025-06-19 09:42:49,836] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle)
09:48:59 zookeeper | [2025-06-19 09:42:49,837] INFO minSessionTimeout set to 6000 ms (org.apache.zookeeper.server.ZooKeeperServer)
09:48:59 zookeeper | [2025-06-19 09:42:49,837] INFO maxSessionTimeout set to 60000 ms (org.apache.zookeeper.server.ZooKeeperServer)
09:48:59 zookeeper | [2025-06-19 09:42:49,838] INFO getData response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache)
09:48:59 zookeeper | [2025-06-19 09:42:49,838] INFO getChildren response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache)
09:48:59 zookeeper | [2025-06-19 09:42:49,839] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
09:48:59 zookeeper | [2025-06-19 09:42:49,839] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
09:48:59 zookeeper | [2025-06-19 09:42:49,839] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
09:48:59 zookeeper | [2025-06-19 09:42:49,839] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
09:48:59 zookeeper | [2025-06-19 09:42:49,839] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
09:48:59 zookeeper | [2025-06-19 09:42:49,839] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
09:48:59 zookeeper | [2025-06-19 09:42:49,841] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer)
09:48:59 zookeeper | [2025-06-19 09:42:49,842] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer)
09:48:59 zookeeper | [2025-06-19 09:42:49,855] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper)
09:48:59 zookeeper | [2025-06-19 09:42:49,855] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper)
09:48:59 zookeeper | [2025-06-19 09:42:49,855] INFO Created server with tickTime 3000 ms minSessionTimeout 6000 ms maxSessionTimeout 60000 ms clientPortListenBacklog -1 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer)
09:48:59 zookeeper | [2025-06-19 09:42:49,884] INFO Logging initialized @519ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log)
09:48:59 zookeeper | [2025-06-19 09:42:49,946] WARN o.e.j.s.ServletContextHandler@6150c3ec{/,null,STOPPED} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler)
09:48:59 zookeeper | [2025-06-19 09:42:49,946] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler)
09:48:59 zookeeper | [2025-06-19 09:42:49,962] INFO jetty-9.4.57.v20241219; built: 2025-01-08T21:24:30.412Z; git: df524e6b29271c2e09ba9aea83c18dc9db464a31; jvm 17.0.14+7 (org.eclipse.jetty.server.Server)
09:48:59 zookeeper | [2025-06-19 09:42:50,000] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session)
09:48:59 zookeeper | [2025-06-19 09:42:50,000] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session)
09:48:59 zookeeper | [2025-06-19 09:42:50,001] INFO node0 Scavenging every 660000ms (org.eclipse.jetty.server.session)
09:48:59 zookeeper | [2025-06-19 09:42:50,005] WARN ServletContext@o.e.j.s.ServletContextHandler@6150c3ec{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler)
09:48:59 zookeeper | [2025-06-19 09:42:50,015] INFO Started o.e.j.s.ServletContextHandler@6150c3ec{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler)
09:48:59 zookeeper | [2025-06-19 09:42:50,025] INFO Started ServerConnector@222545dc{HTTP/1.1, (http/1.1)}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector)
09:48:59 zookeeper | [2025-06-19 09:42:50,025] INFO Started @666ms (org.eclipse.jetty.server.Server)
09:48:59 zookeeper | [2025-06-19 09:42:50,025] INFO Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer)
09:48:59 zookeeper | [2025-06-19 09:42:50,028] INFO Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory)
09:48:59 zookeeper | [2025-06-19 09:42:50,029] WARN maxCnxns is not configured, using default value 0. (org.apache.zookeeper.server.ServerCnxnFactory)
09:48:59 zookeeper | [2025-06-19 09:42:50,030] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. (org.apache.zookeeper.server.NIOServerCnxnFactory)
09:48:59 zookeeper | [2025-06-19 09:42:50,031] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory)
09:48:59 zookeeper | [2025-06-19 09:42:50,044] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory)
09:48:59 zookeeper | [2025-06-19 09:42:50,045] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory)
09:48:59 zookeeper | [2025-06-19 09:42:50,045] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase)
09:48:59 zookeeper | [2025-06-19 09:42:50,045] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase)
09:48:59 zookeeper | [2025-06-19 09:42:50,050] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream)
09:48:59 zookeeper | [2025-06-19 09:42:50,050] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
09:48:59 zookeeper | [2025-06-19 09:42:50,053] INFO Snapshot loaded in 7 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase)
09:48:59 zookeeper | [2025-06-19 09:42:50,053] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
09:48:59 zookeeper | [2025-06-19 09:42:50,054] INFO Snapshot taken in 1 ms (org.apache.zookeeper.server.ZooKeeperServer)
09:48:59 zookeeper | [2025-06-19 09:42:50,061] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor)
09:48:59 zookeeper | [2025-06-19 09:42:50,061] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms (org.apache.zookeeper.server.RequestThrottler)
09:48:59 zookeeper | [2025-06-19 09:42:50,075] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager)
09:48:59 zookeeper | [2025-06-19 09:42:50,076] INFO ZooKeeper audit is disabled. (org.apache.zookeeper.audit.ZKAuditProvider)
09:48:59 zookeeper | [2025-06-19 09:42:51,153] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog)
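The ZooKeeper startup above shows the Jetty AdminServer listening on 0.0.0.0:8080 with command URL /commands, and the client port bound on 2181. A minimal sketch of probing those endpoints by hand during CSIT debugging, assuming the container ports are reachable from the host and curl/nc are installed; the command names below are standard ZooKeeper AdminServer commands, not taken from this log:

# query the AdminServer the log reports on port 8080
curl -s http://localhost:8080/commands/stat   # server statistics
curl -s http://localhost:8080/commands/ruok   # liveness probe
# the classic four-letter words on client port 2181 only work if
# whitelisted via 4lw.commands.whitelist, which this log does not confirm
echo ruok | nc localhost 2181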
09:48:59 Tearing down containers...
09:48:59 Container policy-csit Stopping
09:48:59 Container policy-opa-pdp Stopping
09:48:59 Container grafana Stopping
09:48:59 Container policy-csit Stopped
09:48:59 Container policy-csit Removing
09:48:59 Container policy-csit Removed
09:48:59 Container grafana Stopped
09:48:59 Container grafana Removing
09:48:59 Container grafana Removed
09:48:59 Container prometheus Stopping
09:48:59 Container prometheus Stopped
09:48:59 Container prometheus Removing
09:48:59 Container prometheus Removed
09:49:09 Container policy-opa-pdp Stopped
09:49:09 Container policy-opa-pdp Removing
09:49:09 Container policy-opa-pdp Removed
09:49:09 Container policy-pap Stopping
09:49:19 Container policy-pap Stopped
09:49:19 Container policy-pap Removing
09:49:20 Container policy-pap Removed
09:49:20 Container kafka Stopping
09:49:20 Container policy-api Stopping
09:49:21 Container kafka Stopped
09:49:21 Container kafka Removing
09:49:21 Container kafka Removed
09:49:21 Container zookeeper Stopping
09:49:21 Container zookeeper Stopped
09:49:21 Container zookeeper Removing
09:49:21 Container zookeeper Removed
09:49:30 Container policy-api Stopped
09:49:30 Container policy-api Removing
09:49:30 Container policy-api Removed
09:49:30 Container policy-db-migrator Stopping
09:49:30 Container policy-db-migrator Stopped
09:49:30 Container policy-db-migrator Removing
09:49:30 Container policy-db-migrator Removed
09:49:30 Container postgres Stopping
09:49:30 Container postgres Stopped
09:49:30 Container postgres Removing
09:49:30 Container postgres Removed
09:49:30 Network compose_default Removing
09:49:30 Network compose_default Removed
09:49:30 $ ssh-agent -k
09:49:30 unset SSH_AUTH_SOCK;
09:49:30 unset SSH_AGENT_PID;
09:49:30 echo Agent pid 2108 killed;
09:49:30 [ssh-agent] Stopped.
09:49:31 Robot results publisher started...
09:49:31 INFO: Checking test criticality is deprecated and will be dropped in a future release!
09:49:31 -Parsing output xml:
09:49:31 Done!
09:49:31 -Copying log files to build dir:
09:49:31 Done!
09:49:31 -Assigning results to build:
09:49:31 Done!
09:49:31 -Checking thresholds:
09:49:31 Done!
09:49:31 Done publishing Robot results.
09:49:31 [PostBuildScript] - [INFO] Executing post build scripts.
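The teardown above matches a Docker Compose shutdown of the CSIT stack: dependent containers (policy-csit, grafana, prometheus) go first, the compose_default network is removed last, and policy-opa-pdp and policy-pap each take roughly the default 10 s stop grace period (Stopping at 09:48:59, Stopped at 09:49:09). A minimal sketch of the equivalent manual cleanup, assuming a Compose v2 project directory; the directory location is an assumption, not shown in this log:

# stop and remove all services plus the project network, as mirrored above
docker compose down

# optionally drop named volumes too, and shorten the 10 s stop grace period
docker compose down --volumes --timeout 5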
09:49:31 [policy-opa-pdp-master-project-csit-verify-opa-pdp] $ /bin/bash /tmp/jenkins7685179206029917943.sh
09:49:31 ---> sysstat.sh
09:49:32 [policy-opa-pdp-master-project-csit-verify-opa-pdp] $ /bin/bash /tmp/jenkins10311651474024946976.sh
09:49:32 ---> package-listing.sh
09:49:32 ++ facter osfamily
09:49:32 ++ tr '[:upper:]' '[:lower:]'
09:49:32 + OS_FAMILY=debian
09:49:32 + workspace=/w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp
09:49:32 + START_PACKAGES=/tmp/packages_start.txt
09:49:32 + END_PACKAGES=/tmp/packages_end.txt
09:49:32 + DIFF_PACKAGES=/tmp/packages_diff.txt
09:49:32 + PACKAGES=/tmp/packages_start.txt
09:49:32 + '[' /w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp ']'
09:49:32 + PACKAGES=/tmp/packages_end.txt
09:49:32 + case "${OS_FAMILY}" in
09:49:32 + dpkg -l
09:49:32 + grep '^ii'
09:49:32 + '[' -f /tmp/packages_start.txt ']'
09:49:32 + '[' -f /tmp/packages_end.txt ']'
09:49:32 + diff /tmp/packages_start.txt /tmp/packages_end.txt
09:49:32 + '[' /w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp ']'
09:49:32 + mkdir -p /w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp/archives/
09:49:32 + cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp/archives/
09:49:32 [policy-opa-pdp-master-project-csit-verify-opa-pdp] $ /bin/bash /tmp/jenkins8627751478194894315.sh
09:49:32 ---> capture-instance-metadata.sh
09:49:32 Setup pyenv:
09:49:32   system
09:49:32   3.8.13
09:49:32   3.9.13
09:49:32 * 3.10.6 (set by /w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp/.python-version)
09:49:32 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-ZZzF from file:/tmp/.os_lf_venv
09:49:34 lf-activate-venv(): INFO: Installing: lftools
09:49:44 lf-activate-venv(): INFO: Adding /tmp/venv-ZZzF/bin to PATH
09:49:44 INFO: Running in OpenStack, capturing instance metadata
09:49:45 [policy-opa-pdp-master-project-csit-verify-opa-pdp] $ /bin/bash /tmp/jenkins15723681657532207135.sh
09:49:45 provisioning config files...
09:49:45 copy managed file [jenkins-log-archives-settings] to file:/w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp@tmp/config251901117867828935tmp
09:49:45 Regular expression run condition: Expression=[^.*logs-s3.*], Label=[]
09:49:45 Run condition [Regular expression match] preventing perform for step [Provide Configuration files]
09:49:45 [EnvInject] - Injecting environment variables from a build step.
09:49:45 [EnvInject] - Injecting as environment variables the properties content
09:49:45 SERVER_ID=logs
09:49:45
09:49:45 [EnvInject] - Variables injected successfully.
09:49:45 [policy-opa-pdp-master-project-csit-verify-opa-pdp] $ /bin/bash /tmp/jenkins6240887445542119034.sh
09:49:45 ---> create-netrc.sh
09:49:45 [policy-opa-pdp-master-project-csit-verify-opa-pdp] $ /bin/bash /tmp/jenkins15813338939667377631.sh
09:49:45 ---> python-tools-install.sh
09:49:45 Setup pyenv:
09:49:45   system
09:49:45   3.8.13
09:49:45   3.9.13
09:49:45 * 3.10.6 (set by /w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp/.python-version)
09:49:45 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-ZZzF from file:/tmp/.os_lf_venv
09:49:47 lf-activate-venv(): INFO: Installing: lftools
09:49:56 lf-activate-venv(): INFO: Adding /tmp/venv-ZZzF/bin to PATH
09:49:56 [policy-opa-pdp-master-project-csit-verify-opa-pdp] $ /bin/bash /tmp/jenkins9031981903943680279.sh
09:49:56 ---> sudo-logs.sh
09:49:56 Archiving 'sudo' log..
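The package-listing.sh trace above snapshots the installed Debian packages at the end of the job, diffs them against the start-of-job list, and copies all three files into the workspace archives directory. A condensed sketch of that traced logic as standalone bash, assuming Jenkins' $WORKSPACE variable; this is a paraphrase of the trace, not the literal LF script:

#!/bin/bash
# record installed packages and archive the start/end/diff lists
START=/tmp/packages_start.txt
END=/tmp/packages_end.txt
DIFF=/tmp/packages_diff.txt

dpkg -l | grep '^ii' > "$END"                        # only fully installed packages
[ -f "$START" ] && diff "$START" "$END" > "$DIFF"    # packages that changed during the build
mkdir -p "$WORKSPACE/archives"
cp -f "$DIFF" "$END" "$START" "$WORKSPACE/archives/"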
09:49:56 [policy-opa-pdp-master-project-csit-verify-opa-pdp] $ /bin/bash /tmp/jenkins4847326646889262802.sh
09:49:56 ---> job-cost.sh
09:49:56 Setup pyenv:
09:49:56   system
09:49:56   3.8.13
09:49:56   3.9.13
09:49:56 * 3.10.6 (set by /w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp/.python-version)
09:49:56 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-ZZzF from file:/tmp/.os_lf_venv
09:49:58 lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15
09:50:03 lf-activate-venv(): INFO: Adding /tmp/venv-ZZzF/bin to PATH
09:50:03 INFO: No Stack...
09:50:04 INFO: Retrieving Pricing Info for: v3-standard-8
09:50:04 INFO: Archiving Costs
09:50:04 [policy-opa-pdp-master-project-csit-verify-opa-pdp] $ /bin/bash -l /tmp/jenkins6482962895443464739.sh
09:50:04 ---> logs-deploy.sh
09:50:04 Setup pyenv:
09:50:04   system
09:50:04   3.8.13
09:50:04   3.9.13
09:50:04 * 3.10.6 (set by /w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp/.python-version)
09:50:04 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-ZZzF from file:/tmp/.os_lf_venv
09:50:06 lf-activate-venv(): INFO: Installing: lftools
09:50:15 lf-activate-venv(): INFO: Adding /tmp/venv-ZZzF/bin to PATH
09:50:15 INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/policy-opa-pdp-master-project-csit-verify-opa-pdp/164
09:50:15 INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt
09:50:16 Archives upload complete.
09:50:17 INFO: archiving logs to Nexus
09:50:18 ---> uname -a:
09:50:18 Linux prd-ubuntu1804-docker-8c-8g-22297 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
09:50:18
09:50:18
09:50:18 ---> lscpu:
09:50:18 Architecture:        x86_64
09:50:18 CPU op-mode(s):      32-bit, 64-bit
09:50:18 Byte Order:          Little Endian
09:50:18 CPU(s):              8
09:50:18 On-line CPU(s) list: 0-7
09:50:18 Thread(s) per core:  1
09:50:18 Core(s) per socket:  1
09:50:18 Socket(s):           8
09:50:18 NUMA node(s):        1
09:50:18 Vendor ID:           AuthenticAMD
09:50:18 CPU family:          23
09:50:18 Model:               49
09:50:18 Model name:          AMD EPYC-Rome Processor
09:50:18 Stepping:            0
09:50:18 CPU MHz:             2799.998
09:50:18 BogoMIPS:            5599.99
09:50:18 Virtualization:      AMD-V
09:50:18 Hypervisor vendor:   KVM
09:50:18 Virtualization type: full
09:50:18 L1d cache:           32K
09:50:18 L1i cache:           32K
09:50:18 L2 cache:            512K
09:50:18 L3 cache:            16384K
09:50:18 NUMA node0 CPU(s):   0-7
09:50:18 Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities
09:50:18
09:50:18
09:50:18 ---> nproc:
09:50:18 8
09:50:18
09:50:18
09:50:18 ---> df -h:
09:50:18 Filesystem      Size  Used Avail Use% Mounted on
09:50:18 udev             16G     0   16G   0% /dev
09:50:18 tmpfs           3.2G  708K  3.2G   1% /run
09:50:18 /dev/vda1       155G   15G  141G  10% /
09:50:18 tmpfs            16G     0   16G   0% /dev/shm
09:50:18 tmpfs           5.0M     0  5.0M   0% /run/lock
09:50:18 tmpfs            16G     0   16G   0% /sys/fs/cgroup
09:50:18 /dev/vda15      105M  4.4M  100M   5% /boot/efi
09:50:18 tmpfs           3.2G     0  3.2G   0% /run/user/1001
09:50:18
09:50:18
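The host fingerprint above (uname, lscpu, nproc, df) and the memory, network and sar sections below are the job's end-of-build diagnostics. A sketch that collects the same snapshot into one archive file, assuming Jenkins' $WORKSPACE variable and a hypothetical host-info.txt file name; the command list is taken directly from the sections in this log:

# gather the same host diagnostics shown in this log
{
  echo '---> uname -a:';  uname -a
  echo '---> lscpu:';     lscpu
  echo '---> nproc:';     nproc
  echo '---> df -h:';     df -h
  echo '---> free -m:';   free -m
  echo '---> ip addr:';   ip addr
} > "$WORKSPACE/archives/host-info.txt"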
09:50:18 ---> free -m:
09:50:18               total        used        free      shared  buff/cache   available
09:50:18 Mem:          32167         868       24064           0        7234       30843
09:50:18 Swap:          1023           0        1023
09:50:18
09:50:18
09:50:18 ---> ip addr:
09:50:18 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
09:50:18     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
09:50:18     inet 127.0.0.1/8 scope host lo
09:50:18        valid_lft forever preferred_lft forever
09:50:18     inet6 ::1/128 scope host
09:50:18        valid_lft forever preferred_lft forever
09:50:18 2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000
09:50:18     link/ether fa:16:3e:28:88:28 brd ff:ff:ff:ff:ff:ff
09:50:18     inet 10.30.106.16/23 brd 10.30.107.255 scope global dynamic ens3
09:50:18        valid_lft 85781sec preferred_lft 85781sec
09:50:18     inet6 fe80::f816:3eff:fe28:8828/64 scope link
09:50:18        valid_lft forever preferred_lft forever
09:50:18 3: docker0: mtu 1500 qdisc noqueue state DOWN group default
09:50:18     link/ether 02:42:ac:e5:f0:7a brd ff:ff:ff:ff:ff:ff
09:50:18     inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0
09:50:18        valid_lft forever preferred_lft forever
09:50:18     inet6 fe80::42:acff:fee5:f07a/64 scope link
09:50:18        valid_lft forever preferred_lft forever
09:50:18
09:50:18
09:50:18 ---> sar -b -r -n DEV:
09:50:18 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-22297)  06/19/25  _x86_64_  (8 CPU)
09:50:18
09:50:18 09:40:02 LINUX RESTART (8 CPU)
09:50:18
09:50:18 09:41:01      tps     rtps     wtps   bread/s   bwrtn/s
09:50:18 09:42:02   166.94    19.76   147.18   2335.61  73541.48
09:50:18 09:43:01   634.34     3.22   631.13    429.01 214742.39
09:50:18 09:44:01    25.96     0.07    25.90      6.53   5648.93
09:50:18 09:45:01     5.43     0.00     5.43      0.00    128.25
09:50:18 09:46:01    62.41     0.18    62.22     22.93  10914.05
09:50:18 09:47:01   167.79     0.25   167.54     18.26  23182.14
09:50:18 09:48:01     7.81     0.97     6.85     20.66    167.41
09:50:18 09:49:01    30.83     0.08    30.74     11.20    488.19
09:50:18 09:50:01    47.58     1.20    46.38     88.79   2023.93
09:50:18 Average:   126.74     2.86   123.89    325.69  36432.12
09:50:18
09:50:18 09:41:01  kbmemfree   kbavail  kbmemused  %memused  kbbuffers  kbcached  kbcommit  %commit  kbactive  kbinact  kbdirty
09:50:18 09:42:02   27421424  31531188    5517796     16.75      96836   4267192   2640648     7.77   1067564  4043560  2386352
09:50:18 09:43:01   24478676  31039684    8460544     25.69     162028   6491904   6219976    18.30   1752012  6077392    57448
09:50:18 09:44:01   23434140  30085808    9505080     28.86     163728   6582000   7327720    21.56   2785804  6076540      564
09:50:18 09:45:01   23421224  30073140    9517996     28.90     163908   6582552   7553588    22.22   2797856  6075412      208
09:50:18 09:46:01   22983648  29967216    9955572     30.22     178356   6871288   7916188    23.29   2954632  6318132    19268
09:50:18 09:47:01   22765108  29957680   10174112     30.89     204140   7029292   7956708    23.41   3027604  6440164       12
09:50:18 09:48:01   22750208  29944972   10189012     30.93     204312   7030696   7932184    23.34   3045168  6434796     1224
09:50:18 09:49:01   22969500  30122104    9969720     30.27     204540   6994248   6961064    20.48   2885140  6390440      104
09:50:18 09:50:01   24655828  31594332    8283392     25.15     206152   6768212   1606016     4.73   1460980  6190184    11804
09:50:18 Average:   23875528  30479569    9063692     27.52     176000   6513043   6234899    18.34   2419640  6005180   275220
09:50:18
09:50:18 09:41:01       IFACE  rxpck/s  txpck/s    rxkB/s   txkB/s  rxcmp/s  txcmp/s  rxmcst/s  %ifutil
09:50:18 09:42:02     docker0     0.00     0.00      0.00     0.00     0.00     0.00      0.00     0.00
09:50:18 09:42:02        ens3   952.27   556.99  25023.12    50.33     0.00     0.00      0.00     0.00
09:50:18 09:42:02          lo    12.20    12.20      1.13     1.13     0.00     0.00      0.00     0.00
09:50:18 09:43:01 vethdab34d6     1.71     2.02      0.17     0.20     0.00     0.00      0.00     0.00
09:50:18 09:43:01     docker0     0.00     0.00      0.00     0.00     0.00     0.00      0.00     0.00
09:50:18 09:43:01 br-8928db6a1613  45.22  59.86      2.91   314.48     0.00     0.00      0.00     0.00
09:50:18 09:43:01 vethf66426b     0.39     0.58      0.02     0.03     0.00     0.00      0.00     0.00
09:50:18 09:44:01 vethdab34d6     9.50     8.12      1.18     1.21     0.00     0.00      0.00     0.00
09:50:18 09:44:01     docker0     0.00     0.00      0.00     0.00     0.00     0.00      0.00     0.00
09:50:18 09:44:01 br-8928db6a1613   0.37    0.27      0.02     0.02     0.00     0.00      0.00     0.00
09:50:18 09:44:01 vethf66426b     7.73     8.25      1.45     0.87     0.00     0.00      0.00     0.00
09:50:18 09:45:01 vethdab34d6    12.98     8.73      1.10     1.23     0.00     0.00      0.00     0.00
09:50:18 09:45:01     docker0     0.00     0.00      0.00     0.00     0.00     0.00      0.00     0.00
09:50:18 09:45:01 br-8928db6a1613   0.38    0.22      0.02     0.01     0.00     0.00      0.00     0.00
09:50:18 09:45:01 vethf66426b     6.33     9.28      1.44     0.71     0.00     0.00      0.00     0.00
09:50:18 09:46:01 vethdab34d6    15.63    10.85      1.59     1.60     0.00     0.00      0.00     0.00
09:50:18 09:46:01     docker0    89.97   118.35      4.80  1061.49     0.00     0.00      0.00     0.00
09:50:18 09:46:01 br-8928db6a1613   0.20    0.27      0.02     0.02     0.00     0.00      0.00     0.00
09:50:18 09:46:01 vethf66426b   106.52   109.02     12.92    25.74     0.00     0.00      0.00     0.00
09:50:18 09:47:01 vethdab34d6    14.66    10.06      1.40     1.45     0.00     0.00      0.00     0.00
09:50:18 09:47:01     docker0    42.66    57.41      3.67   295.69     0.00     0.00      0.00     0.00
09:50:18 09:47:01 br-8928db6a1613   0.07    0.00      0.00     0.00     0.00     0.00      0.00     0.00
09:50:18 09:47:01 vethf66426b   140.16   142.61     16.14    33.64     0.00     0.00      0.00     0.00
09:50:18 09:48:01 vethdab34d6    17.59    13.05      2.15     1.97     0.00     0.00      0.00     0.00
09:50:18 09:48:01     docker0     0.00     0.00      0.00     0.00     0.00     0.00      0.00     0.00
09:50:18 09:48:01 br-8928db6a1613   0.00    0.00      0.00     0.00     0.00     0.00      0.00     0.00
09:50:18 09:48:01 vethf66426b   592.59   593.17     64.74   142.51     0.00     0.00      0.00     0.01
09:50:18 09:49:01 vethdab34d6    13.81     9.15      1.16     1.30     0.00     0.00      0.00     0.00
09:50:18 09:49:01     docker0     0.00     0.00      0.00     0.00     0.00     0.00      0.00     0.00
09:50:18 09:49:01 br-8928db6a1613   0.02    0.00      0.00     0.00     0.00     0.00      0.00     0.00
09:50:18 09:49:01 vethf66426b     6.72     9.60      1.57     0.74     0.00     0.00      0.00     0.00
09:50:18 09:50:01     docker0     0.00     0.00      0.00     0.00     0.00     0.00      0.00     0.00
09:50:18 09:50:01        ens3  2038.71  1249.58  37435.60   196.33     0.00     0.00      0.00     0.00
09:50:18 09:50:01          lo    30.66    30.66      2.71     2.71     0.00     0.00      0.00     0.00
09:50:18 Average:      docker0    14.76    19.56      0.94   151.07     0.00     0.00      0.00     0.00
09:50:18 Average:         ens3   175.08   106.19   4029.36    13.16     0.00     0.00      0.00     0.00
09:50:18 Average:           lo     2.89     2.89      0.25     0.25     0.00     0.00      0.00     0.00
09:50:18
09:50:18
09:50:18 ---> sar -P ALL:
09:50:18 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-22297)  06/19/25  _x86_64_  (8 CPU)
09:50:18
09:50:18 09:40:02 LINUX RESTART (8 CPU)
09:50:18
09:50:18 09:41:01   CPU   %user   %nice   %system   %iowait   %steal   %idle
09:50:18 09:42:02   all   11.36    0.00      3.35      6.93     0.08   78.29
09:50:18 09:42:02     0    5.88    0.00      2.89      0.47     0.05   90.71
09:50:18 09:42:02     1    5.70    0.00      3.11      0.35     0.08   90.75
09:50:18 09:42:02     2   40.70    0.00      5.08      1.72     0.10   52.40
09:50:18 09:42:02     3   11.08    0.00      3.10      0.50     0.10   85.22
09:50:18 09:42:02     4    6.00    0.00      2.91     20.20     0.07   70.82
09:50:18 09:42:02     5    7.37    0.00      2.81      2.37     0.10   87.35
09:50:18 09:42:02     6    7.48    0.00      3.01     21.88     0.07   67.56
09:50:18 09:42:02     7    5.94    0.00      3.86      8.43     0.04   81.72
09:50:18 09:43:01   all   16.69    0.00      7.36     14.19     0.11   61.65
09:50:18 09:43:01     0   17.16    0.00      7.62      5.27     0.09   69.87
09:50:18 09:43:01     1   16.24    0.00      6.91     14.38     0.10   62.37
09:50:18 09:43:01     2   16.66    0.00      7.87     30.83     0.12   44.52
09:50:18 09:43:01     3   18.16    0.00      6.80      7.82     0.12   67.11
09:50:18 09:43:01     4   16.35    0.00      6.79     15.49     0.10   61.26
09:50:18 09:43:01     5   17.46    0.00      6.98      8.42     0.09   67.06
09:50:18 09:43:01     6   16.95    0.00      7.95      9.90     0.10   65.09
09:50:18 09:43:01     7   14.53    0.00      8.01     21.58     0.10   55.77
09:50:18 09:44:01   all   21.06    0.00      2.01      0.37     0.08   76.49
09:50:18 09:44:01     0   26.77    0.00      2.11      0.00     0.08   71.04
09:50:18 09:44:01     1   25.24    0.00      2.58      0.20     0.07   71.91
09:50:18 09:44:01     2   20.20    0.00      1.87      0.00     0.07   77.86
09:50:18 09:44:01     3   23.99    0.00      2.16      1.97     0.08   71.79
09:50:18 09:44:01     4   17.33    0.00      2.03      0.35     0.08   80.21
09:50:18 09:44:01     5   19.99    0.00      1.78      0.05     0.10   78.08
09:50:18 09:44:01     6   16.89    0.00      1.42      0.07     0.07   81.56
09:50:18 09:44:01     7   18.07    0.00      2.06      0.27     0.07   79.54
09:50:18 09:45:01   all    0.91    0.00      0.16      0.03     0.04   98.86
09:50:18 09:45:01     0    0.87    0.00      0.18      0.02     0.05   98.88
09:50:18 09:45:01     1    0.77    0.00      0.20      0.03     0.05   98.95
09:50:18 09:45:01     2    0.58    0.00      0.12      0.02     0.02   99.27
09:50:18 09:45:01     3    0.82    0.00      0.13      0.00     0.07   98.98
09:50:18 09:45:01     4    2.11    0.00      0.15      0.00     0.03   97.71
09:50:18 09:45:01     5    0.95    0.00      0.22      0.03     0.05   98.75
09:50:18 09:45:01     6    0.57    0.00      0.15      0.00     0.05   99.23
09:50:18 09:45:01     7    0.63    0.00      0.15      0.12     0.05   99.05
09:50:18 09:46:01   all    3.74    0.00      1.04      0.42     0.05   94.74
09:50:18 09:46:01     0    4.36    0.00      0.77      0.00     0.05   94.82
09:50:18 09:46:01     1    3.87    0.00      1.24      0.08     0.05   94.76
09:50:18 09:46:01     2    3.50    0.00      1.04      0.38     0.05   95.03
09:50:18 09:46:01     3    2.98    0.00      1.21      1.21     0.03   94.58
09:50:18 09:46:01     4    3.09    0.00      0.74      0.05     0.05   96.07
09:50:18 09:46:01     5    2.85    0.00      1.05      0.10     0.07   95.93
09:50:18 09:46:01     6    4.25    0.00      1.02      0.22     0.05   94.46
09:50:18 09:46:01     7    5.05    0.00      1.24      1.32     0.07   92.33
09:50:18 09:47:01   all    7.97    0.00      2.23      0.98     0.07   88.75
09:50:18 09:47:01     0   15.21    0.00      2.64      1.04     0.07   81.05
09:50:18 09:47:01     1    8.79    0.00      2.84      0.30     0.07   88.00
09:50:18 09:47:01     2   10.18    0.00      1.95      1.89     0.08   85.90
09:50:18 09:47:01     3    7.32    0.00      2.11      0.12     0.05   90.41
09:50:18 09:47:01     4    6.12    0.00      1.68      0.76     0.07   91.38
09:50:18 09:47:01     5    5.90    0.00      2.46      1.93     0.08   89.63
09:50:18 09:47:01     6    5.25    0.00      2.27      0.76     0.07   91.66
09:50:18 09:47:01     7    4.97    0.00      1.88      1.06     0.08   92.00
09:50:18 09:48:01   all    3.81    0.00      0.76      0.04     0.05   95.35
09:50:18 09:48:01     0    4.74    0.00      0.78      0.02     0.07   94.39
09:50:18 09:48:01     1    4.59    0.00      0.62      0.00     0.07   94.72
09:50:18 09:48:01     2    3.10    0.00      0.68      0.03     0.03   96.14
09:50:18 09:48:01     3    3.05    0.00      0.52      0.02     0.03   96.38
09:50:18 09:48:01     4    4.44    0.00      0.58      0.00     0.02   94.96
09:50:18 09:48:01     5    3.72    0.00      0.62      0.00     0.07   95.59
09:50:18 09:48:01     6    2.86    0.00      1.63      0.18     0.05   95.28
09:50:18 09:48:01     7    3.97    0.00      0.62      0.03     0.05   95.33
09:50:18 09:49:01   all    1.47    0.00      0.56      0.06     0.05   97.86
09:50:18 09:49:01     0    1.32    0.00      0.87      0.13     0.05   97.63
09:50:18 09:49:01     1    1.02    0.00      0.57      0.00     0.05   98.36
09:50:18 09:49:01     2    1.47    0.00      0.52      0.20     0.03   97.78
09:50:18 09:49:01     3    1.24    0.00      0.49      0.05     0.08   98.14
09:50:18 09:49:01     4    1.09    0.00      0.35      0.03     0.03   98.50
09:50:18 09:49:01     5    1.80    0.00      0.68      0.05     0.08   97.38
09:50:18 09:49:01     6    2.04    0.00      0.57      0.02     0.07   97.31
09:50:18 09:49:01     7    1.75    0.00      0.52      0.02     0.03   97.68
09:50:18 09:50:01   all    6.07    0.00      0.76      0.23     0.03   92.91
09:50:18 09:50:01     0    0.70    0.00      0.43      0.03     0.02   98.82
09:50:18 09:50:01     1    3.51    0.00      0.65      0.07     0.03   95.74
09:50:18 09:50:01     2    1.22    0.00      0.62      1.30     0.03   96.82
09:50:18 09:50:01     3    1.44    0.00      0.53      0.10     0.02   97.91
09:50:18 09:50:01     4   14.69    0.00      0.85      0.12     0.03   84.31
09:50:18 09:50:01     5   18.78    0.00      1.32      0.13     0.07   79.70
09:50:18 09:50:01     6    7.02    0.00      1.15      0.02     0.02   91.79
09:50:18 09:50:01     7    1.24    0.00      0.43      0.05     0.03   98.25
09:50:18 Average:   all    8.08    0.00      2.00      2.53     0.06   87.33
09:50:18 Average:     0    8.53    0.00      2.01      0.76     0.06   88.64
09:50:18 Average:     1    7.72    0.00      2.06      1.67     0.06   88.49
09:50:18 Average:     2   10.79    0.00      2.17      3.95     0.06   83.02
09:50:18 Average:     3    7.74    0.00      1.88      1.29     0.07   89.02
09:50:18 Average:     4    7.87    0.00      1.76      4.00     0.05   86.31
09:50:18 Average:     5    8.73    0.00      1.98      1.43     0.08   87.78
09:50:18 Average:     6    7.01    0.00      2.11      3.61     0.06   87.21
09:50:18 Average:     7    6.21    0.00      2.04      3.54     0.06   88.14
09:50:18
09:50:18
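The sar tables above come from sysstat's periodic sampler (started by the sysstat.sh step earlier in this log), and the spikes line up with the build timeline: the 09:43 I/O burst (634 tps, ~215 MB/s written) during image pulls and database setup, and the 09:46-09:48 vethf66426b traffic while the CSIT suite exercised the containers. A sketch of replaying the same reports from the collected binary data, assuming the Ubuntu default data directory /var/log/sysstat and the day-19 file name, both of which are assumptions not shown in this log:

# replay the reports shown above from the binary data file
sar -b -r -n DEV -f /var/log/sysstat/sa19   # I/O, memory, per-interface network
sar -P ALL -f /var/log/sysstat/sa19         # per-CPU utilisation
# export a machine-readable copy for archiving
sadf -d /var/log/sysstat/sa19 -- -P ALL > cpu-stats.csv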