11:45:12 Started by timer
11:45:12 Running as SYSTEM
11:45:12 [EnvInject] - Loading node environment variables.
11:45:12 Building remotely on prd-ubuntu1804-docker-8c-8g-21811 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/policy-opa-pdp-master-project-csit-policy-opa-pdp
11:45:12 [ssh-agent] Looking for ssh-agent implementation...
11:45:12 [ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
11:45:12 $ ssh-agent
11:45:12 SSH_AUTH_SOCK=/tmp/ssh-htRDVA9ycgkL/agent.2041
11:45:12 SSH_AGENT_PID=2043
11:45:12 [ssh-agent] Started.
11:45:12 Running ssh-add (command line suppressed)
11:45:12 Identity added: /w/workspace/policy-opa-pdp-master-project-csit-policy-opa-pdp@tmp/private_key_14197888195368131858.key (/w/workspace/policy-opa-pdp-master-project-csit-policy-opa-pdp@tmp/private_key_14197888195368131858.key)
11:45:12 [ssh-agent] Using credentials onap-jobbuiler (Gerrit user)
11:45:12 The recommended git tool is: NONE
11:45:14 using credential onap-jenkins-ssh
11:45:14 Wiping out workspace first.
11:45:14 Cloning the remote Git repository
11:45:14 Cloning repository git://cloud.onap.org/mirror/policy/docker.git
11:45:14 > git init /w/workspace/policy-opa-pdp-master-project-csit-policy-opa-pdp # timeout=10
11:45:14 Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
11:45:14 > git --version # timeout=10
11:45:14 > git --version # 'git version 2.17.1'
11:45:14 using GIT_SSH to set credentials Gerrit user
11:45:14 Verifying host key using manually-configured host key entries
11:45:14 > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git +refs/heads/*:refs/remotes/origin/* # timeout=30
11:45:14 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
11:45:14 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
11:45:15 Avoid second fetch
11:45:15 > git rev-parse refs/remotes/origin/master^{commit} # timeout=10
11:45:15 Checking out Revision ed38a50541249063daf2cfb00b312fb173adeace (refs/remotes/origin/master)
11:45:15 > git config core.sparsecheckout # timeout=10
11:45:15 > git checkout -f ed38a50541249063daf2cfb00b312fb173adeace # timeout=30
11:45:15 Commit message: "Remove python from the java app docker images"
11:45:15 > git rev-list --no-walk 473f78ecac5fb75e5968b31a5bab95eaba72c803 # timeout=10
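[Editor's note] To reproduce this pinned checkout outside Jenkins, the commands logged above boil down to the following shell sequence (a minimal sketch based only on this log; anonymous read access to the cloud.onap.org mirror is assumed):

    # Sketch: replay the Jenkins git plugin's checkout of the pinned revision.
    git init policy-docker && cd policy-docker
    # Fetch all branches from the mirror into remote-tracking refs, as the job does.
    git fetch --tags git://cloud.onap.org/mirror/policy/docker.git '+refs/heads/*:refs/remotes/origin/*'
    # Check out the exact revision the job built (detached HEAD).
    git checkout -f ed38a50541249063daf2cfb00b312fb173adeace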
11:45:18 provisioning config files...
11:45:18 copy managed file [npmrc] to file:/home/jenkins/.npmrc
11:45:18 copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf
11:45:18 [policy-opa-pdp-master-project-csit-policy-opa-pdp] $ /bin/bash /tmp/jenkins4779490974570966169.sh
11:45:18 ---> python-tools-install.sh
11:45:18 Setup pyenv:
11:45:18 * system (set by /opt/pyenv/version)
11:45:18 * 3.8.13 (set by /opt/pyenv/version)
11:45:18 * 3.9.13 (set by /opt/pyenv/version)
11:45:18 * 3.10.6 (set by /opt/pyenv/version)
11:45:23 lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-dI98
11:45:23 lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv
11:45:27 lf-activate-venv(): INFO: Installing: lftools
11:45:52 lf-activate-venv(): INFO: Adding /tmp/venv-dI98/bin to PATH
11:45:52 Generating Requirements File
11:46:12 Python 3.10.6
11:46:13 pip 25.1.1 from /tmp/venv-dI98/lib/python3.10/site-packages/pip (python 3.10)
11:46:13 appdirs==1.4.4
11:46:13 argcomplete==3.6.2
11:46:13 aspy.yaml==1.3.0
11:46:13 attrs==25.3.0
11:46:13 autopage==0.5.2
11:46:13 beautifulsoup4==4.13.4
11:46:13 boto3==1.38.37
11:46:13 botocore==1.38.37
11:46:13 bs4==0.0.2
11:46:13 cachetools==5.5.2
11:46:13 certifi==2025.6.15
11:46:13 cffi==1.17.1
11:46:13 cfgv==3.4.0
11:46:13 chardet==5.2.0
11:46:13 charset-normalizer==3.4.2
11:46:13 click==8.2.1
11:46:13 cliff==4.10.0
11:46:13 cmd2==2.6.1
11:46:13 cryptography==3.3.2
11:46:13 debtcollector==3.0.0
11:46:13 decorator==5.2.1
11:46:13 defusedxml==0.7.1
11:46:13 Deprecated==1.2.18
11:46:13 distlib==0.3.9
11:46:13 dnspython==2.7.0
11:46:13 docker==7.1.0
11:46:13 dogpile.cache==1.4.0
11:46:13 durationpy==0.10
11:46:13 email_validator==2.2.0
11:46:13 filelock==3.18.0
11:46:13 future==1.0.0
11:46:13 gitdb==4.0.12
11:46:13 GitPython==3.1.44
11:46:13 google-auth==2.40.3
11:46:13 httplib2==0.22.0
11:46:13 identify==2.6.12
11:46:13 idna==3.10
11:46:13 importlib-resources==1.5.0
11:46:13 iso8601==2.1.0
11:46:13 Jinja2==3.1.6
11:46:13 jmespath==1.0.1
11:46:13 jsonpatch==1.33
11:46:13 jsonpointer==3.0.0
11:46:13 jsonschema==4.24.0
11:46:13 jsonschema-specifications==2025.4.1
11:46:13 keystoneauth1==5.11.1
11:46:13 kubernetes==33.1.0
11:46:13 lftools==0.37.13
11:46:13 lxml==5.4.0
11:46:13 MarkupSafe==3.0.2
11:46:13 msgpack==1.1.1
11:46:13 multi_key_dict==2.0.3
11:46:13 munch==4.0.0
11:46:13 netaddr==1.3.0
11:46:13 niet==1.4.2
11:46:13 nodeenv==1.9.1
11:46:13 oauth2client==4.1.3
11:46:13 oauthlib==3.2.2
11:46:13 openstacksdk==4.6.0
11:46:13 os-client-config==2.1.0
11:46:13 os-service-types==1.7.0
11:46:13 osc-lib==4.0.2
11:46:13 oslo.config==9.8.0
11:46:13 oslo.context==6.0.0
11:46:13 oslo.i18n==6.5.1
11:46:13 oslo.log==7.1.0
11:46:13 oslo.serialization==5.7.0
11:46:13 oslo.utils==9.0.0
11:46:13 packaging==25.0
11:46:13 pbr==6.1.1
11:46:13 platformdirs==4.3.8
11:46:13 prettytable==3.16.0
11:46:13 psutil==7.0.0
11:46:13 pyasn1==0.6.1
11:46:13 pyasn1_modules==0.4.2
11:46:13 pycparser==2.22
11:46:13 pygerrit2==2.0.15
11:46:13 PyGithub==2.6.1
11:46:13 PyJWT==2.10.1
11:46:13 PyNaCl==1.5.0
11:46:13 pyparsing==2.4.7
11:46:13 pyperclip==1.9.0
11:46:13 pyrsistent==0.20.0
11:46:13 python-cinderclient==9.7.0
11:46:13 python-dateutil==2.9.0.post0
11:46:13 python-heatclient==4.2.0
11:46:13 python-jenkins==1.8.2
11:46:13 python-keystoneclient==5.6.0
11:46:13 python-magnumclient==4.8.1
11:46:13 python-openstackclient==8.1.0
11:46:13 python-swiftclient==4.8.0
11:46:13 PyYAML==6.0.2
11:46:13 referencing==0.36.2
11:46:13 requests==2.32.4
11:46:13 requests-oauthlib==2.0.0
11:46:13 requestsexceptions==1.4.0
11:46:13 rfc3986==2.0.0
11:46:13 rpds-py==0.25.1
11:46:13 rsa==4.9.1
11:46:13 ruamel.yaml==0.18.14
11:46:13 ruamel.yaml.clib==0.2.12
11:46:13 s3transfer==0.13.0
11:46:13 simplejson==3.20.1
11:46:13 six==1.17.0
11:46:13 smmap==5.0.2
11:46:13 soupsieve==2.7
11:46:13 stevedore==5.4.1
11:46:13 tabulate==0.9.0
11:46:13 toml==0.10.2
11:46:13 tomlkit==0.13.3
11:46:13 tqdm==4.67.1
11:46:13 typing_extensions==4.14.0
11:46:13 tzdata==2025.2
11:46:13 urllib3==1.26.20
11:46:13 virtualenv==20.31.2
11:46:13 wcwidth==0.2.13
11:46:13 websocket-client==1.8.0
11:46:13 wrapt==1.17.2
11:46:13 xdg==6.0.0
11:46:13 xmltodict==0.14.2
11:46:13 yq==3.4.3
11:46:13 [EnvInject] - Injecting environment variables from a build step.
11:46:13 [EnvInject] - Injecting as environment variables the properties content
11:46:13 SET_JDK_VERSION=openjdk17
11:46:13 GIT_URL="git://cloud.onap.org/mirror"
11:46:13
11:46:13 [EnvInject] - Variables injected successfully.
11:46:13 [policy-opa-pdp-master-project-csit-policy-opa-pdp] $ /bin/sh /tmp/jenkins15929837088930121518.sh
11:46:13 ---> update-java-alternatives.sh
11:46:13 ---> Updating Java version
11:46:13 ---> Ubuntu/Debian system detected
11:46:13 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode
11:46:13 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode
11:46:13 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode
11:46:14 openjdk version "17.0.4" 2022-07-19
11:46:14 OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04)
11:46:14 OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing)
11:46:14 JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64
11:46:14 [EnvInject] - Injecting environment variables from a build step.
11:46:14 [EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env'
11:46:14 [EnvInject] - Variables injected successfully.
11:46:14 [policy-opa-pdp-master-project-csit-policy-opa-pdp] $ /bin/sh -xe /tmp/jenkins10888155307536913732.sh
11:46:14 + /w/workspace/policy-opa-pdp-master-project-csit-policy-opa-pdp/csit/run-project-csit.sh policy-opa-pdp
11:46:14 WARNING! Using --password via the CLI is insecure. Use --password-stdin.
11:46:14 WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json.
11:46:14 Configure a credential helper to remove this warning. See
11:46:14 https://docs.docker.com/engine/reference/commandline/login/#credentials-store
11:46:14
11:46:14 Login Succeeded
11:46:14 docker: 'compose' is not a docker command.
11:46:14 See 'docker --help'
11:46:14 Docker Compose Plugin not installed. Installing now...
11:46:14 [curl progress table elided; 60.2M downloaded at an average 68.9M/s, complete by 11:46:15]
11:46:15 Setting project configuration for: policy-opa-pdp
11:46:15 Configuring docker compose...
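[Editor's note] The "docker: 'compose' is not a docker command" probe above is what triggers the curl download that follows. In script form, a detect-and-install step of this kind looks roughly like the sketch below; the release URL and CLI-plugin path are assumptions for illustration, not taken from this log:

    # Sketch: install the Docker Compose v2 CLI plugin if the probe fails.
    if ! docker compose version >/dev/null 2>&1; then
      echo "Docker Compose Plugin not installed. Installing now..."
      mkdir -p "$HOME/.docker/cli-plugins"
      # Assumed source: the upstream docker/compose GitHub release artifact.
      curl -fsSL https://github.com/docker/compose/releases/latest/download/docker-compose-linux-x86_64 \
        -o "$HOME/.docker/cli-plugins/docker-compose"
      chmod +x "$HOME/.docker/cli-plugins/docker-compose"
    fi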
11:46:17 Starting opa-pdp using postgres + Grafana/Prometheus
11:46:17 policy-db-migrator Pulling
11:46:17 prometheus Pulling
11:46:17 opa-pdp Pulling
11:46:17 pap Pulling
11:46:17 grafana Pulling
11:46:17 zookeeper Pulling
11:46:17 kafka Pulling
11:46:17 api Pulling
11:46:17 postgres Pulling
[11:46:17-11:46:25 per-layer docker pull progress elided: interleaved "Pulling fs layer" / "Waiting" / "Downloading" / "Verifying Checksum" / "Download complete" / "Extracting" / "Pull complete" lines for the several dozen layers making up these images]
11:46:22 policy-db-migrator Pulled
11:46:23 api Pulled
11:46:23 pap Pulled
[excerpt ends at 11:46:25 with the remaining images still downloading and extracting]
11:46:25 55f2b468da67 Extracting [===> ] 19.5MB/257.9MB 11:46:25 6ac0e4adf315 Extracting [============> ] 15.6MB/62.07MB 11:46:25 f90c8eb4724c Extracting [> ] 327.7kB/30.59MB 11:46:25 e73cb4a42719 Extracting [=================================> ] 72.42MB/109.1MB 11:46:25 2b1b549e99de Downloading [> ] 31.67kB/2.646MB 11:46:25 eabd8714fec9 Extracting [=================> ] 134.3MB/375MB 11:46:25 c49e0ee60bfb Extracting [=========================> ] 54.03MB/107.3MB 11:46:25 2b1b549e99de Verifying Checksum 11:46:25 2b1b549e99de Download complete 11:46:25 55f2b468da67 Extracting [====> ] 21.73MB/257.9MB 11:46:25 6ac0e4adf315 Extracting [=============> ] 17.27MB/62.07MB 11:46:25 f90c8eb4724c Extracting [=====> ] 3.277MB/30.59MB 11:46:25 eabd8714fec9 Extracting [==================> ] 137MB/375MB 11:46:25 e73cb4a42719 Extracting [==================================> ] 75.2MB/109.1MB 11:46:25 55f2b468da67 Extracting [====> ] 23.95MB/257.9MB 11:46:25 c49e0ee60bfb Extracting [==========================> ] 57.38MB/107.3MB 11:46:25 f90c8eb4724c Extracting [==========> ] 6.226MB/30.59MB 11:46:25 eabd8714fec9 Extracting [==================> ] 139.8MB/375MB 11:46:25 6ac0e4adf315 Extracting [==================> ] 23.4MB/62.07MB 11:46:25 e73cb4a42719 Extracting [====================================> ] 79.1MB/109.1MB 11:46:25 547372ea8ffa Downloading [> ] 130kB/12.63MB 11:46:25 90dd78f85976 Downloading [> ] 424.9kB/41.49MB 11:46:25 65d25c0f02f3 Downloading [> ] 293.8kB/28.98MB 11:46:25 c49e0ee60bfb Extracting [============================> ] 60.72MB/107.3MB 11:46:25 55f2b468da67 Extracting [====> ] 24.51MB/257.9MB 11:46:25 f90c8eb4724c Extracting [=============> ] 8.52MB/30.59MB 11:46:25 e73cb4a42719 Extracting [=====================================> ] 82.44MB/109.1MB 11:46:25 eabd8714fec9 Extracting [===================> ] 142.6MB/375MB 11:46:25 547372ea8ffa Downloading [===================> ] 4.98MB/12.63MB 11:46:25 90dd78f85976 Downloading [======> ] 5.537MB/41.49MB 11:46:25 65d25c0f02f3 Downloading [============> ] 7.372MB/28.98MB 11:46:25 6ac0e4adf315 Extracting [===================> ] 24.51MB/62.07MB 11:46:25 55f2b468da67 Extracting [======> ] 31.75MB/257.9MB 11:46:25 c49e0ee60bfb Extracting [=============================> ] 62.95MB/107.3MB 11:46:25 547372ea8ffa Downloading [===============================================> ] 11.93MB/12.63MB 11:46:25 eabd8714fec9 Extracting [===================> ] 144.8MB/375MB 11:46:25 f90c8eb4724c Extracting [==================> ] 11.47MB/30.59MB 11:46:25 e73cb4a42719 Extracting [=======================================> ] 85.79MB/109.1MB 11:46:25 547372ea8ffa Verifying Checksum 11:46:25 547372ea8ffa Download complete 11:46:25 90dd78f85976 Downloading [================> ] 14.06MB/41.49MB 11:46:25 65d25c0f02f3 Downloading [==============================> ] 17.69MB/28.98MB 11:46:25 4f4fb700ef54 Downloading [==================================================>] 32B/32B 11:46:25 4f4fb700ef54 Verifying Checksum 11:46:25 4f4fb700ef54 Download complete 11:46:25 6ac0e4adf315 Extracting [======================> ] 27.85MB/62.07MB 11:46:25 55f2b468da67 Extracting [=======> ] 39.55MB/257.9MB 11:46:25 c49e0ee60bfb Extracting [==============================> ] 65.73MB/107.3MB 11:46:25 f90c8eb4724c Extracting [=======================> ] 14.42MB/30.59MB 11:46:25 90dd78f85976 Downloading [===========================> ] 23MB/41.49MB 11:46:25 65d25c0f02f3 Downloading [===============================================> ] 27.43MB/28.98MB 11:46:25 eabd8714fec9 Extracting [===================> ] 
147.1MB/375MB 11:46:25 e73cb4a42719 Extracting [=========================================> ] 89.69MB/109.1MB 11:46:25 65d25c0f02f3 Verifying Checksum 11:46:25 65d25c0f02f3 Download complete 11:46:25 55f2b468da67 Extracting [=========> ] 47.91MB/257.9MB 11:46:25 6ac0e4adf315 Extracting [========================> ] 30.64MB/62.07MB 11:46:25 c49e0ee60bfb Extracting [===============================> ] 68.52MB/107.3MB 11:46:25 f90c8eb4724c Extracting [============================> ] 17.69MB/30.59MB 11:46:25 90dd78f85976 Downloading [=========================================> ] 34.08MB/41.49MB 11:46:25 eabd8714fec9 Extracting [===================> ] 149.8MB/375MB 11:46:25 e73cb4a42719 Extracting [==========================================> ] 92.47MB/109.1MB 11:46:25 55f2b468da67 Extracting [==========> ] 55.71MB/257.9MB 11:46:25 6ac0e4adf315 Extracting [==========================> ] 32.87MB/62.07MB 11:46:25 90dd78f85976 Verifying Checksum 11:46:25 90dd78f85976 Download complete 11:46:25 c49e0ee60bfb Extracting [=================================> ] 71.86MB/107.3MB 11:46:25 f90c8eb4724c Extracting [==================================> ] 21.3MB/30.59MB 11:46:25 eabd8714fec9 Extracting [====================> ] 151.5MB/375MB 11:46:25 55f2b468da67 Extracting [============> ] 64.62MB/257.9MB 11:46:25 e73cb4a42719 Extracting [===========================================> ] 94.14MB/109.1MB 11:46:26 6ac0e4adf315 Extracting [==================================> ] 42.34MB/62.07MB 11:46:26 c49e0ee60bfb Extracting [===================================> ] 75.2MB/107.3MB 11:46:26 f90c8eb4724c Extracting [========================================> ] 24.58MB/30.59MB 11:46:26 eabd8714fec9 Extracting [====================> ] 153.2MB/375MB 11:46:26 55f2b468da67 Extracting [=============> ] 69.63MB/257.9MB 11:46:26 e73cb4a42719 Extracting [============================================> ] 96.93MB/109.1MB 11:46:26 6ac0e4adf315 Extracting [===========================================> ] 54.03MB/62.07MB 11:46:26 c49e0ee60bfb Extracting [====================================> ] 77.99MB/107.3MB 11:46:26 f90c8eb4724c Extracting [==========================================> ] 26.21MB/30.59MB 11:46:26 55f2b468da67 Extracting [==============> ] 76.32MB/257.9MB 11:46:26 eabd8714fec9 Extracting [====================> ] 156.5MB/375MB 11:46:26 6ac0e4adf315 Extracting [================================================> ] 60.16MB/62.07MB 11:46:26 e73cb4a42719 Extracting [=============================================> ] 99.16MB/109.1MB 11:46:26 c49e0ee60bfb Extracting [=====================================> ] 80.77MB/107.3MB 11:46:26 55f2b468da67 Extracting [================> ] 83.56MB/257.9MB 11:46:26 f90c8eb4724c Extracting [===============================================> ] 28.84MB/30.59MB 11:46:26 eabd8714fec9 Extracting [=====================> ] 159.3MB/375MB 11:46:26 6ac0e4adf315 Extracting [=================================================> ] 61.83MB/62.07MB 11:46:26 e73cb4a42719 Extracting [==============================================> ] 101.9MB/109.1MB 11:46:26 c49e0ee60bfb Extracting [======================================> ] 83.56MB/107.3MB 11:46:26 6ac0e4adf315 Extracting [==================================================>] 62.07MB/62.07MB 11:46:26 55f2b468da67 Extracting [=================> ] 89.13MB/257.9MB 11:46:26 f90c8eb4724c Extracting [=================================================> ] 30.15MB/30.59MB 11:46:26 eabd8714fec9 Extracting [=====================> ] 161.5MB/375MB 11:46:26 e73cb4a42719 Extracting 
[===============================================> ] 104.2MB/109.1MB 11:46:26 f90c8eb4724c Extracting [==================================================>] 30.59MB/30.59MB 11:46:26 c49e0ee60bfb Extracting [=========================================> ] 88.57MB/107.3MB 11:46:26 55f2b468da67 Extracting [==================> ] 94.7MB/257.9MB 11:46:26 eabd8714fec9 Extracting [======================> ] 166MB/375MB 11:46:26 55f2b468da67 Extracting [===================> ] 98.6MB/257.9MB 11:46:26 c49e0ee60bfb Extracting [==========================================> ] 90.8MB/107.3MB 11:46:26 eabd8714fec9 Extracting [======================> ] 166.6MB/375MB 11:46:26 e73cb4a42719 Extracting [================================================> ] 105.8MB/109.1MB 11:46:26 c49e0ee60bfb Extracting [===========================================> ] 93.03MB/107.3MB 11:46:26 e73cb4a42719 Extracting [================================================> ] 106.4MB/109.1MB 11:46:26 eabd8714fec9 Extracting [======================> ] 168.2MB/375MB 11:46:26 55f2b468da67 Extracting [===================> ] 101.4MB/257.9MB 11:46:26 c49e0ee60bfb Extracting [=============================================> ] 97.48MB/107.3MB 11:46:26 eabd8714fec9 Extracting [========================> ] 182.7MB/375MB 11:46:27 55f2b468da67 Extracting [===================> ] 102.5MB/257.9MB 11:46:27 c49e0ee60bfb Extracting [===============================================> ] 101.9MB/107.3MB 11:46:27 eabd8714fec9 Extracting [=========================> ] 195MB/375MB 11:46:27 e73cb4a42719 Extracting [=================================================> ] 107.5MB/109.1MB 11:46:27 55f2b468da67 Extracting [=====================> ] 108.6MB/257.9MB 11:46:27 eabd8714fec9 Extracting [===========================> ] 205.6MB/375MB 11:46:27 e73cb4a42719 Extracting [==================================================>] 109.1MB/109.1MB 11:46:27 55f2b468da67 Extracting [=====================> ] 112MB/257.9MB 11:46:27 c49e0ee60bfb Extracting [================================================> ] 104.2MB/107.3MB 11:46:27 eabd8714fec9 Extracting [============================> ] 213.4MB/375MB 11:46:27 55f2b468da67 Extracting [======================> ] 115.3MB/257.9MB 11:46:27 c49e0ee60bfb Extracting [================================================> ] 104.7MB/107.3MB 11:46:27 eabd8714fec9 Extracting [============================> ] 217.3MB/375MB 11:46:27 6ac0e4adf315 Pull complete 11:46:27 f90c8eb4724c Pull complete 11:46:27 c49e0ee60bfb Extracting [==================================================>] 107.3MB/107.3MB 11:46:27 55f2b468da67 Extracting [=======================> ] 119.2MB/257.9MB 11:46:27 eabd8714fec9 Extracting [=============================> ] 220.6MB/375MB 11:46:27 55f2b468da67 Extracting [=======================> ] 121.4MB/257.9MB 11:46:27 eabd8714fec9 Extracting [==============================> ] 226.7MB/375MB 11:46:27 55f2b468da67 Extracting [========================> ] 125.9MB/257.9MB 11:46:27 eabd8714fec9 Extracting [===============================> ] 232.8MB/375MB 11:46:27 55f2b468da67 Extracting [=========================> ] 132MB/257.9MB 11:46:27 55f2b468da67 Extracting [==========================> ] 135.4MB/257.9MB 11:46:27 eabd8714fec9 Extracting [===============================> ] 238.4MB/375MB 11:46:27 2b1b549e99de Extracting [> ] 32.77kB/2.646MB 11:46:28 2b1b549e99de Extracting [====> ] 229.4kB/2.646MB 11:46:28 eabd8714fec9 Extracting [===============================> ] 239MB/375MB 11:46:28 55f2b468da67 Extracting [==========================> ] 
138.1MB/257.9MB 11:46:28 55f2b468da67 Extracting [===========================> ] 143.7MB/257.9MB 11:46:28 2b1b549e99de Extracting [======> ] 327.7kB/2.646MB 11:46:28 eabd8714fec9 Extracting [================================> ] 241.2MB/375MB 11:46:28 55f2b468da67 Extracting [============================> ] 147.6MB/257.9MB 11:46:28 2b1b549e99de Extracting [==================================================>] 2.646MB/2.646MB 11:46:28 eabd8714fec9 Extracting [================================> ] 245.1MB/375MB 11:46:28 55f2b468da67 Extracting [=============================> ] 150.4MB/257.9MB 11:46:28 eabd8714fec9 Extracting [=================================> ] 250.1MB/375MB 11:46:28 55f2b468da67 Extracting [==============================> ] 154.9MB/257.9MB 11:46:28 eabd8714fec9 Extracting [=================================> ] 252.3MB/375MB 11:46:28 55f2b468da67 Extracting [==============================> ] 156MB/257.9MB 11:46:28 eabd8714fec9 Extracting [==================================> ] 256.2MB/375MB 11:46:28 55f2b468da67 Extracting [==============================> ] 159.3MB/257.9MB 11:46:28 f3b09c502777 Extracting [> ] 557.1kB/56.52MB 11:46:28 eabd8714fec9 Extracting [==================================> ] 261.3MB/375MB 11:46:28 f3b09c502777 Extracting [====> ] 5.571MB/56.52MB 11:46:28 55f2b468da67 Extracting [===============================> ] 164.9MB/257.9MB 11:46:28 c49e0ee60bfb Pull complete 11:46:28 e73cb4a42719 Pull complete 11:46:28 eabd8714fec9 Extracting [===================================> ] 266.3MB/375MB 11:46:29 55f2b468da67 Extracting [================================> ] 167.7MB/257.9MB 11:46:29 f3b09c502777 Extracting [========> ] 9.47MB/56.52MB 11:46:29 eabd8714fec9 Extracting [===================================> ] 268.5MB/375MB 11:46:29 f3b09c502777 Extracting [==========> ] 12.26MB/56.52MB 11:46:29 2b1b549e99de Pull complete 11:46:29 a83b68436f09 Extracting [==================================================>] 9.919kB/9.919kB 11:46:29 a83b68436f09 Extracting [==================================================>] 9.919kB/9.919kB 11:46:29 55f2b468da67 Extracting [=================================> ] 170.5MB/257.9MB 11:46:29 f3b09c502777 Extracting [=============> ] 15.04MB/56.52MB 11:46:29 eabd8714fec9 Extracting [===================================> ] 269.6MB/375MB 11:46:29 55f2b468da67 Extracting [=================================> ] 171.6MB/257.9MB 11:46:29 f3b09c502777 Extracting [=============> ] 15.6MB/56.52MB 11:46:29 547372ea8ffa Extracting [> ] 131.1kB/12.63MB 11:46:29 f3b09c502777 Extracting [=================> ] 19.5MB/56.52MB 11:46:29 547372ea8ffa Extracting [=> ] 262.1kB/12.63MB 11:46:29 eabd8714fec9 Extracting [====================================> ] 270.2MB/375MB 11:46:29 384497dbce3b Extracting [> ] 557.1kB/63.48MB 11:46:29 55f2b468da67 Extracting [=================================> ] 172.1MB/257.9MB 11:46:29 f3b09c502777 Extracting [==================> ] 21.17MB/56.52MB 11:46:29 547372ea8ffa Extracting [================> ] 4.194MB/12.63MB 11:46:29 eabd8714fec9 Extracting [====================================> ] 271.3MB/375MB 11:46:29 55f2b468da67 Extracting [=================================> ] 173.2MB/257.9MB 11:46:29 f3b09c502777 Extracting [======================> ] 25.07MB/56.52MB 11:46:29 384497dbce3b Extracting [> ] 1.114MB/63.48MB 11:46:29 547372ea8ffa Extracting [================================> ] 8.126MB/12.63MB 11:46:30 f3b09c502777 Extracting [========================> ] 27.3MB/56.52MB 11:46:30 547372ea8ffa Extracting 
[============================================> ] 11.27MB/12.63MB 11:46:30 eabd8714fec9 Extracting [====================================> ] 272.4MB/375MB 11:46:30 f3b09c502777 Extracting [========================> ] 27.85MB/56.52MB 11:46:30 55f2b468da67 Extracting [=================================> ] 173.8MB/257.9MB 11:46:30 384497dbce3b Extracting [=> ] 1.671MB/63.48MB 11:46:30 547372ea8ffa Extracting [==================================================>] 12.63MB/12.63MB 11:46:30 f3b09c502777 Extracting [====================================> ] 41.22MB/56.52MB 11:46:30 f3b09c502777 Extracting [=================================================> ] 56.26MB/56.52MB 11:46:30 f3b09c502777 Extracting [==================================================>] 56.52MB/56.52MB 11:46:30 eabd8714fec9 Extracting [====================================> ] 273MB/375MB 11:46:30 a83b68436f09 Pull complete 11:46:30 eabd8714fec9 Extracting [====================================> ] 273.5MB/375MB 11:46:30 55f2b468da67 Extracting [=================================> ] 174.4MB/257.9MB 11:46:30 384497dbce3b Extracting [=> ] 2.228MB/63.48MB 11:46:30 eabd8714fec9 Extracting [====================================> ] 275.2MB/375MB 11:46:30 787d6bee9571 Extracting [==================================================>] 127B/127B 11:46:30 787d6bee9571 Extracting [==================================================>] 127B/127B 11:46:30 55f2b468da67 Extracting [==================================> ] 175.5MB/257.9MB 11:46:30 547372ea8ffa Pull complete 11:46:30 f3b09c502777 Pull complete 11:46:30 eabd8714fec9 Extracting [====================================> ] 277.4MB/375MB 11:46:30 384497dbce3b Extracting [==> ] 2.785MB/63.48MB 11:46:30 55f2b468da67 Extracting [==================================> ] 177.1MB/257.9MB 11:46:31 eabd8714fec9 Extracting [=====================================> ] 280.2MB/375MB 11:46:31 55f2b468da67 Extracting [==================================> ] 179.4MB/257.9MB 11:46:31 384497dbce3b Extracting [===> ] 4.456MB/63.48MB 11:46:31 eabd8714fec9 Extracting [=====================================> ] 284.1MB/375MB 11:46:31 55f2b468da67 Extracting [===================================> ] 182.7MB/257.9MB 11:46:31 384497dbce3b Extracting [===> ] 5.014MB/63.48MB 11:46:31 65d25c0f02f3 Extracting [> ] 294.9kB/28.98MB 11:46:31 eabd8714fec9 Extracting [======================================> ] 287.4MB/375MB 11:46:31 55f2b468da67 Extracting [====================================> ] 186.6MB/257.9MB 11:46:31 65d25c0f02f3 Extracting [========> ] 5.014MB/28.98MB 11:46:31 384497dbce3b Extracting [======> ] 7.799MB/63.48MB 11:46:31 eabd8714fec9 Extracting [======================================> ] 291.3MB/375MB 11:46:31 55f2b468da67 Extracting [====================================> ] 190.5MB/257.9MB 11:46:31 65d25c0f02f3 Extracting [==============> ] 8.258MB/28.98MB 11:46:31 eabd8714fec9 Extracting [=======================================> ] 293.6MB/375MB 11:46:31 384497dbce3b Extracting [=======> ] 9.47MB/63.48MB 11:46:31 55f2b468da67 Extracting [=====================================> ] 193.9MB/257.9MB 11:46:31 65d25c0f02f3 Extracting [==================> ] 10.91MB/28.98MB 11:46:31 384497dbce3b Extracting [========> ] 10.58MB/63.48MB 11:46:31 eabd8714fec9 Extracting [=======================================> ] 295.2MB/375MB 11:46:31 55f2b468da67 Extracting [=====================================> ] 195.5MB/257.9MB 11:46:31 65d25c0f02f3 Extracting [========================> ] 14.45MB/28.98MB 11:46:31 65d25c0f02f3 Extracting 
[================================> ] 18.58MB/28.98MB 11:46:31 eabd8714fec9 Extracting [=======================================> ] 295.8MB/375MB 11:46:31 384497dbce3b Extracting [=========> ] 12.26MB/63.48MB 11:46:31 55f2b468da67 Extracting [======================================> ] 196.1MB/257.9MB 11:46:31 65d25c0f02f3 Extracting [======================================> ] 22.12MB/28.98MB 11:46:31 eabd8714fec9 Extracting [=======================================> ] 296.4MB/375MB 11:46:31 384497dbce3b Extracting [==========> ] 13.93MB/63.48MB 11:46:31 65d25c0f02f3 Extracting [==================================================>] 28.98MB/28.98MB 11:46:31 55f2b468da67 Extracting [======================================> ] 197.2MB/257.9MB 11:46:32 eabd8714fec9 Extracting [=======================================> ] 299.1MB/375MB 11:46:32 384497dbce3b Extracting [============> ] 16.15MB/63.48MB 11:46:32 408012a7b118 Extracting [==================================================>] 637B/637B 11:46:32 408012a7b118 Extracting [==================================================>] 637B/637B 11:46:32 eabd8714fec9 Extracting [=======================================> ] 299.7MB/375MB 11:46:32 384497dbce3b Extracting [=============> ] 16.71MB/63.48MB 11:46:32 55f2b468da67 Extracting [======================================> ] 200MB/257.9MB 11:46:32 55f2b468da67 Extracting [=======================================> ] 202.2MB/257.9MB 11:46:32 384497dbce3b Extracting [===============> ] 19.5MB/63.48MB 11:46:32 eabd8714fec9 Extracting [========================================> ] 301.9MB/375MB 11:46:32 787d6bee9571 Pull complete 11:46:32 384497dbce3b Extracting [=================> ] 22.28MB/63.48MB 11:46:32 55f2b468da67 Extracting [=======================================> ] 203.3MB/257.9MB 11:46:32 eabd8714fec9 Extracting [========================================> ] 304.2MB/375MB 11:46:32 384497dbce3b Extracting [===================> ] 25.07MB/63.48MB 11:46:32 eabd8714fec9 Extracting [========================================> ] 305.8MB/375MB 11:46:32 55f2b468da67 Extracting [=======================================> ] 205.6MB/257.9MB 11:46:32 384497dbce3b Extracting [======================> ] 28.41MB/63.48MB 11:46:33 55f2b468da67 Extracting [========================================> ] 207.2MB/257.9MB 11:46:33 eabd8714fec9 Extracting [========================================> ] 306.9MB/375MB 11:46:33 13ff0988aaea Extracting [==================================================>] 167B/167B 11:46:33 13ff0988aaea Extracting [==================================================>] 167B/167B 11:46:33 384497dbce3b Extracting [========================> ] 31.2MB/63.48MB 11:46:33 eabd8714fec9 Extracting [=========================================> ] 309.7MB/375MB 11:46:33 55f2b468da67 Extracting [========================================> ] 208.3MB/257.9MB 11:46:33 384497dbce3b Extracting [=========================> ] 32.87MB/63.48MB 11:46:33 eabd8714fec9 Extracting [=========================================> ] 311.4MB/375MB 11:46:33 55f2b468da67 Extracting [========================================> ] 211.1MB/257.9MB 11:46:33 384497dbce3b Extracting [============================> ] 35.65MB/63.48MB 11:46:33 384497dbce3b Extracting [==============================> ] 38.99MB/63.48MB 11:46:33 384497dbce3b Extracting [=================================> ] 42.89MB/63.48MB 11:46:33 55f2b468da67 Extracting [=========================================> ] 212.2MB/257.9MB 11:46:33 eabd8714fec9 Extracting [=========================================> 
] 313.1MB/375MB 11:46:33 65d25c0f02f3 Pull complete 11:46:33 384497dbce3b Extracting [===================================> ] 45.68MB/63.48MB 11:46:33 408012a7b118 Pull complete 11:46:33 55f2b468da67 Extracting [=========================================> ] 213.4MB/257.9MB 11:46:33 384497dbce3b Extracting [=====================================> ] 47.35MB/63.48MB 11:46:33 eabd8714fec9 Extracting [=========================================> ] 313.6MB/375MB 11:46:33 55f2b468da67 Extracting [=========================================> ] 215MB/257.9MB 11:46:33 384497dbce3b Extracting [======================================> ] 48.46MB/63.48MB 11:46:33 90dd78f85976 Extracting [> ] 426kB/41.49MB 11:46:33 eabd8714fec9 Extracting [=========================================> ] 314.7MB/375MB 11:46:33 55f2b468da67 Extracting [==========================================> ] 217.3MB/257.9MB 11:46:33 384497dbce3b Extracting [=======================================> ] 50.14MB/63.48MB 11:46:34 44986281b8b9 Extracting [==================================================>] 4.022kB/4.022kB 11:46:34 44986281b8b9 Extracting [==================================================>] 4.022kB/4.022kB 11:46:34 90dd78f85976 Extracting [==> ] 2.13MB/41.49MB 11:46:34 55f2b468da67 Extracting [==========================================> ] 218.4MB/257.9MB 11:46:34 eabd8714fec9 Extracting [==========================================> ] 316.4MB/375MB 11:46:34 384497dbce3b Extracting [=======================================> ] 50.69MB/63.48MB 11:46:34 90dd78f85976 Extracting [=====> ] 4.26MB/41.49MB 11:46:34 55f2b468da67 Extracting [==========================================> ] 221.2MB/257.9MB 11:46:34 eabd8714fec9 Extracting [==========================================> ] 319.8MB/375MB 11:46:34 384497dbce3b Extracting [=========================================> ] 52.92MB/63.48MB 11:46:34 90dd78f85976 Extracting [==========> ] 8.52MB/41.49MB 11:46:34 90dd78f85976 Extracting [===============> ] 12.78MB/41.49MB 11:46:34 eabd8714fec9 Extracting [==========================================> ] 320.3MB/375MB 11:46:34 13ff0988aaea Pull complete 11:46:34 90dd78f85976 Extracting [====================> ] 17.04MB/41.49MB 11:46:34 55f2b468da67 Extracting [===========================================> ] 222.3MB/257.9MB 11:46:34 eabd8714fec9 Extracting [==========================================> ] 321.4MB/375MB 11:46:34 384497dbce3b Extracting [==============================================> ] 59.05MB/63.48MB 11:46:34 4b82842ab819 Extracting [==================================================>] 5.415kB/5.415kB 11:46:34 4b82842ab819 Extracting [==================================================>] 5.415kB/5.415kB 11:46:34 90dd78f85976 Extracting [======================> ] 18.74MB/41.49MB 11:46:34 55f2b468da67 Extracting [===========================================> ] 224.5MB/257.9MB 11:46:34 eabd8714fec9 Extracting [===========================================> ] 323.6MB/375MB 11:46:34 90dd78f85976 Extracting [================================> ] 27.26MB/41.49MB 11:46:34 55f2b468da67 Extracting [===========================================> ] 225.6MB/257.9MB 11:46:34 eabd8714fec9 Extracting [===========================================> ] 325.9MB/375MB 11:46:34 90dd78f85976 Extracting [========================================> ] 33.65MB/41.49MB 11:46:35 55f2b468da67 Extracting [===========================================> ] 226.2MB/257.9MB 11:46:35 384497dbce3b Extracting [==============================================> ] 59.6MB/63.48MB 11:46:35 90dd78f85976 
Extracting [==============================================> ] 38.34MB/41.49MB 11:46:35 44986281b8b9 Pull complete 11:46:35 eabd8714fec9 Extracting [===========================================> ] 326.4MB/375MB 11:46:35 bf70c5107ab5 Extracting [==================================================>] 1.44kB/1.44kB 11:46:35 bf70c5107ab5 Extracting [==================================================>] 1.44kB/1.44kB 11:46:35 55f2b468da67 Extracting [============================================> ] 227.3MB/257.9MB 11:46:35 90dd78f85976 Extracting [=================================================> ] 40.89MB/41.49MB 11:46:35 384497dbce3b Extracting [=================================================> ] 62.39MB/63.48MB 11:46:35 90dd78f85976 Extracting [==================================================>] 41.49MB/41.49MB 11:46:35 eabd8714fec9 Extracting [===========================================> ] 328.1MB/375MB 11:46:35 55f2b468da67 Extracting [============================================> ] 228.4MB/257.9MB 11:46:35 4b82842ab819 Pull complete 11:46:35 384497dbce3b Extracting [==================================================>] 63.48MB/63.48MB 11:46:35 384497dbce3b Extracting [==================================================>] 63.48MB/63.48MB 11:46:35 55f2b468da67 Extracting [============================================> ] 230.1MB/257.9MB 11:46:35 eabd8714fec9 Extracting [===========================================> ] 329.8MB/375MB 11:46:35 55f2b468da67 Extracting [============================================> ] 231.2MB/257.9MB 11:46:35 eabd8714fec9 Extracting [============================================> ] 331.4MB/375MB 11:46:35 7e568a0dc8fb Extracting [==================================================>] 184B/184B 11:46:35 7e568a0dc8fb Extracting [==================================================>] 184B/184B 11:46:35 55f2b468da67 Extracting [============================================> ] 231.7MB/257.9MB 11:46:35 eabd8714fec9 Extracting [============================================> ] 332MB/375MB 11:46:36 55f2b468da67 Extracting [=============================================> ] 232.8MB/257.9MB 11:46:36 eabd8714fec9 Extracting [============================================> ] 334.8MB/375MB 11:46:36 55f2b468da67 Extracting [=============================================> ] 234MB/257.9MB 11:46:36 eabd8714fec9 Extracting [============================================> ] 335.9MB/375MB 11:46:36 bf70c5107ab5 Pull complete 11:46:36 90dd78f85976 Pull complete 11:46:36 384497dbce3b Pull complete 11:46:36 55f2b468da67 Extracting [=============================================> ] 236.7MB/257.9MB 11:46:36 eabd8714fec9 Extracting [=============================================> ] 340.4MB/375MB 11:46:36 eabd8714fec9 Extracting [=============================================> ] 340.9MB/375MB 11:46:36 55f2b468da67 Extracting [==============================================> ] 241.8MB/257.9MB 11:46:36 7e568a0dc8fb Pull complete 11:46:36 4f4fb700ef54 Extracting [==================================================>] 32B/32B 11:46:36 4f4fb700ef54 Extracting [==================================================>] 32B/32B 11:46:37 055b9255fa03 Extracting [==================================================>] 11.92kB/11.92kB 11:46:37 055b9255fa03 Extracting [==================================================>] 11.92kB/11.92kB 11:46:37 eabd8714fec9 Extracting [=============================================> ] 342MB/375MB 11:46:37 55f2b468da67 Extracting [===============================================> ] 244.5MB/257.9MB 11:46:37 
55f2b468da67 Extracting [================================================> ] 249.6MB/257.9MB 11:46:37 55f2b468da67 Extracting [=================================================> ] 255.7MB/257.9MB 11:46:37 eabd8714fec9 Extracting [=============================================> ] 342.6MB/375MB 11:46:37 55f2b468da67 Extracting [==================================================>] 257.9MB/257.9MB 11:46:37 55f2b468da67 Extracting [==================================================>] 257.9MB/257.9MB 11:46:37 eabd8714fec9 Extracting [=============================================> ] 344.8MB/375MB 11:46:38 eabd8714fec9 Extracting [==============================================> ] 345.9MB/375MB 11:46:38 eabd8714fec9 Extracting [===============================================> ] 352.6MB/375MB 11:46:38 1ccde423731d Extracting [==========================> ] 32.77kB/61.44kB 11:46:38 1ccde423731d Extracting [==================================================>] 61.44kB/61.44kB 11:46:38 eabd8714fec9 Extracting [===============================================> ] 357.1MB/375MB 11:46:38 4f4fb700ef54 Pull complete 11:46:38 055b9255fa03 Pull complete 11:46:38 eabd8714fec9 Extracting [===============================================> ] 358.2MB/375MB 11:46:39 eabd8714fec9 Extracting [=================================================> ] 367.7MB/375MB 11:46:39 eabd8714fec9 Extracting [=================================================> ] 373.8MB/375MB 11:46:39 eabd8714fec9 Extracting [==================================================>] 375MB/375MB 11:46:39 1ccde423731d Pull complete 11:46:39 55f2b468da67 Pull complete 11:46:39 b176d7edde70 Extracting [==================================================>] 1.227kB/1.227kB 11:46:39 b176d7edde70 Extracting [==================================================>] 1.227kB/1.227kB 11:46:39 postgres Pulled 11:46:39 7221d93db8a9 Extracting [==================================================>] 100B/100B 11:46:39 7221d93db8a9 Extracting [==================================================>] 100B/100B 11:46:39 eabd8714fec9 Pull complete 11:46:39 82bfc142787e Extracting [> ] 98.3kB/8.613MB 11:46:39 opa-pdp Pulled 11:46:40 45fd2fec8a19 Extracting [==================================================>] 1.103kB/1.103kB 11:46:40 45fd2fec8a19 Extracting [==================================================>] 1.103kB/1.103kB 11:46:40 82bfc142787e Extracting [===============> ] 2.654MB/8.613MB 11:46:40 7221d93db8a9 Pull complete 11:46:40 b176d7edde70 Pull complete 11:46:40 7df673c7455d Extracting [==================================================>] 694B/694B 11:46:40 7df673c7455d Extracting [==================================================>] 694B/694B 11:46:40 45fd2fec8a19 Pull complete 11:46:40 8f10199ed94b Extracting [> ] 98.3kB/8.768MB 11:46:40 grafana Pulled 11:46:40 82bfc142787e Extracting [==================================================>] 8.613MB/8.613MB 11:46:40 82bfc142787e Pull complete 11:46:40 8f10199ed94b Extracting [======> ] 1.18MB/8.768MB 11:46:40 7df673c7455d Pull complete 11:46:40 46baca71a4ef Extracting [==================================================>] 18.11kB/18.11kB 11:46:40 46baca71a4ef Extracting [==================================================>] 18.11kB/18.11kB 11:46:40 prometheus Pulled 11:46:40 8f10199ed94b Extracting [==================================================>] 8.768MB/8.768MB 11:46:40 8f10199ed94b Pull complete 11:46:40 f963a77d2726 Extracting [==================================================>] 21.44kB/21.44kB 11:46:40 f963a77d2726 Extracting 
[==================================================>] 21.44kB/21.44kB 11:46:40 46baca71a4ef Pull complete 11:46:40 b0e0ef7895f4 Extracting [> ] 393.2kB/37.01MB 11:46:40 f963a77d2726 Pull complete 11:46:40 b0e0ef7895f4 Extracting [=====================> ] 15.73MB/37.01MB 11:46:40 f3a82e9f1761 Extracting [> ] 458.8kB/44.41MB 11:46:40 b0e0ef7895f4 Extracting [==========================================> ] 31.46MB/37.01MB 11:46:40 f3a82e9f1761 Extracting [===============> ] 13.76MB/44.41MB 11:46:40 b0e0ef7895f4 Extracting [==================================================>] 37.01MB/37.01MB 11:46:40 b0e0ef7895f4 Pull complete 11:46:40 c0c90eeb8aca Extracting [==================================================>] 1.105kB/1.105kB 11:46:40 c0c90eeb8aca Extracting [==================================================>] 1.105kB/1.105kB 11:46:40 f3a82e9f1761 Extracting [===========================> ] 24.77MB/44.41MB 11:46:40 f3a82e9f1761 Extracting [==============================================> ] 41.29MB/44.41MB 11:46:40 c0c90eeb8aca Pull complete 11:46:40 5cfb27c10ea5 Extracting [==================================================>] 852B/852B 11:46:40 5cfb27c10ea5 Extracting [==================================================>] 852B/852B 11:46:40 f3a82e9f1761 Extracting [==================================================>] 44.41MB/44.41MB 11:46:40 f3a82e9f1761 Pull complete 11:46:40 79161a3f5362 Extracting [==================================================>] 4.656kB/4.656kB 11:46:40 79161a3f5362 Extracting [==================================================>] 4.656kB/4.656kB 11:46:41 5cfb27c10ea5 Pull complete 11:46:41 40a5eed61bb0 Extracting [==================================================>] 98B/98B 11:46:41 40a5eed61bb0 Extracting [==================================================>] 98B/98B 11:46:41 79161a3f5362 Pull complete 11:46:41 9c266ba63f51 Extracting [==================================================>] 1.105kB/1.105kB 11:46:41 9c266ba63f51 Extracting [==================================================>] 1.105kB/1.105kB 11:46:41 40a5eed61bb0 Pull complete 11:46:41 e040ea11fa10 Extracting [==================================================>] 173B/173B 11:46:41 e040ea11fa10 Extracting [==================================================>] 173B/173B 11:46:41 9c266ba63f51 Pull complete 11:46:41 2e8a7df9c2ee Extracting [==================================================>] 851B/851B 11:46:41 2e8a7df9c2ee Extracting [==================================================>] 851B/851B 11:46:41 e040ea11fa10 Pull complete 11:46:41 2e8a7df9c2ee Pull complete 11:46:41 10f05dd8b1db Extracting [==================================================>] 98B/98B 11:46:41 10f05dd8b1db Extracting [==================================================>] 98B/98B 11:46:41 09d5a3f70313 Extracting [> ] 557.1kB/109.2MB 11:46:41 09d5a3f70313 Extracting [======> ] 15.04MB/109.2MB 11:46:41 10f05dd8b1db Pull complete 11:46:41 41dac8b43ba6 Extracting [==================================================>] 171B/171B 11:46:41 41dac8b43ba6 Extracting [==================================================>] 171B/171B 11:46:41 09d5a3f70313 Extracting [===========> ] 25.07MB/109.2MB 11:46:41 41dac8b43ba6 Pull complete 11:46:41 71a9f6a9ab4d Extracting [=======> ] 32.77kB/230.6kB 11:46:41 09d5a3f70313 Extracting [==================> ] 41.22MB/109.2MB 11:46:41 71a9f6a9ab4d Extracting [==================================================>] 230.6kB/230.6kB 11:46:41 09d5a3f70313 Extracting [==========================> ] 57.38MB/109.2MB 11:46:41 
71a9f6a9ab4d Pull complete 11:46:41 da3ed5db7103 Extracting [> ] 557.1kB/127.4MB 11:46:41 09d5a3f70313 Extracting [===============================> ] 68.52MB/109.2MB 11:46:42 da3ed5db7103 Extracting [=====> ] 12.81MB/127.4MB 11:46:42 09d5a3f70313 Extracting [========================================> ] 87.46MB/109.2MB 11:46:42 da3ed5db7103 Extracting [==========> ] 27.85MB/127.4MB 11:46:42 09d5a3f70313 Extracting [===============================================> ] 104.2MB/109.2MB 11:46:42 da3ed5db7103 Extracting [==================> ] 47.35MB/127.4MB 11:46:42 09d5a3f70313 Extracting [=================================================> ] 108.6MB/109.2MB 11:46:42 09d5a3f70313 Extracting [==================================================>] 109.2MB/109.2MB 11:46:42 09d5a3f70313 Extracting [==================================================>] 109.2MB/109.2MB 11:46:42 09d5a3f70313 Pull complete 11:46:42 356f5c2c843b Extracting [==================================================>] 3.623kB/3.623kB 11:46:42 356f5c2c843b Extracting [==================================================>] 3.623kB/3.623kB 11:46:42 da3ed5db7103 Extracting [==========================> ] 66.85MB/127.4MB 11:46:42 da3ed5db7103 Extracting [================================> ] 83.56MB/127.4MB 11:46:42 356f5c2c843b Pull complete 11:46:42 kafka Pulled 11:46:42 da3ed5db7103 Extracting [======================================> ] 97.48MB/127.4MB 11:46:42 da3ed5db7103 Extracting [=============================================> ] 114.8MB/127.4MB 11:46:42 da3ed5db7103 Extracting [================================================> ] 122.6MB/127.4MB 11:46:42 da3ed5db7103 Extracting [==================================================>] 127.4MB/127.4MB 11:46:42 da3ed5db7103 Pull complete 11:46:42 c955f6e31a04 Extracting [==================================================>] 3.446kB/3.446kB 11:46:42 c955f6e31a04 Extracting [==================================================>] 3.446kB/3.446kB 11:46:43 c955f6e31a04 Pull complete 11:46:43 zookeeper Pulled 11:46:43 Network compose_default Creating 11:46:43 Network compose_default Created 11:46:43 Container zookeeper Creating 11:46:43 Container postgres Creating 11:46:43 Container prometheus Creating 11:46:53 Container prometheus Created 11:46:53 Container grafana Creating 11:46:53 Container zookeeper Created 11:46:53 Container kafka Creating 11:46:53 Container postgres Created 11:46:53 Container policy-db-migrator Creating 11:46:53 Container policy-db-migrator Created 11:46:53 Container grafana Created 11:46:53 Container policy-api Creating 11:46:53 Container kafka Created 11:46:53 Container policy-api Created 11:46:53 Container policy-pap Creating 11:46:53 Container policy-pap Created 11:46:53 Container policy-opa-pdp Creating 11:46:53 Container policy-opa-pdp Created 11:46:53 Container postgres Starting 11:46:53 Container zookeeper Starting 11:46:53 Container prometheus Starting 11:46:54 Container zookeeper Started 11:46:54 Container kafka Starting 11:46:55 Container kafka Started 11:46:55 Container prometheus Started 11:46:55 Container grafana Starting 11:46:56 Container grafana Started 11:46:56 Container postgres Started 11:46:56 Container policy-db-migrator Starting 11:46:57 Container policy-db-migrator Started 11:46:57 Container policy-api Starting 11:46:58 Container policy-api Started 11:46:58 Container policy-pap Starting 11:46:59 Container policy-pap Started 11:46:59 Container policy-opa-pdp Starting 11:47:00 Container policy-opa-pdp Started 11:47:00 Prometheus server: http://localhost:30259 
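[editor's note] For anyone replaying this compose bring-up outside Jenkins, the containers above can be polled until they report running instead of relying on log order. A minimal sketch using the docker SDK (docker==7.1.0 is in the job's venv); the container names come from this log, but the polling loop itself is an assumption, not part of the CI scripts.

    import time

    import docker

    # Containers expected to stay up; policy-db-migrator is a one-shot job
    # that exits after migrating the schema, so it is not polled here.
    EXPECTED = {"zookeeper", "kafka", "postgres", "policy-api", "policy-pap",
                "policy-opa-pdp", "prometheus", "grafana"}

    def wait_for_containers(timeout_s: int = 300) -> None:
        client = docker.from_env()
        deadline = time.monotonic() + timeout_s
        missing = set(EXPECTED)
        while time.monotonic() < deadline:
            # containers.list() returns running containers by default
            missing = EXPECTED - {c.name for c in client.containers.list()}
            if not missing:
                return
            time.sleep(5)
        raise TimeoutError(f"containers not running: {sorted(missing)}")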
11:47:00 Prometheus server: http://localhost:30259
11:47:00 Grafana server: http://localhost:30269
11:47:00 Waiting 3 minutes for OPA-PDP to start...
11:50:00 Checking if REST port 30003 is open on localhost ...
11:50:00 IMAGE                                                      NAMES            STATUS
11:50:00 nexus3.onap.org:10001/onap/policy-opa-pdp:1.0.8-SNAPSHOT   policy-opa-pdp   Up 3 minutes
11:50:00 nexus3.onap.org:10001/onap/policy-pap:4.2.1-SNAPSHOT       policy-pap       Up 3 minutes
11:50:00 nexus3.onap.org:10001/onap/policy-api:4.2.1-SNAPSHOT       policy-api       Up 3 minutes
11:50:00 nexus3.onap.org:10001/confluentinc/cp-kafka:7.4.9          kafka            Up 3 minutes
11:50:00 nexus3.onap.org:10001/grafana/grafana:latest               grafana          Up 3 minutes
11:50:00 nexus3.onap.org:10001/confluentinc/cp-zookeeper:latest     zookeeper        Up 3 minutes
11:50:00 nexus3.onap.org:10001/prom/prometheus:latest               prometheus       Up 3 minutes
11:50:00 nexus3.onap.org:10001/library/postgres:16.4                postgres         Up 3 minutes
11:50:00 Checking if REST port 30012 is open on localhost ...
11:50:00 IMAGE                                                      NAMES            STATUS
11:50:00 nexus3.onap.org:10001/onap/policy-opa-pdp:1.0.8-SNAPSHOT   policy-opa-pdp   Up 3 minutes
11:50:00 nexus3.onap.org:10001/onap/policy-pap:4.2.1-SNAPSHOT       policy-pap       Up 3 minutes
11:50:00 nexus3.onap.org:10001/onap/policy-api:4.2.1-SNAPSHOT       policy-api       Up 3 minutes
11:50:00 nexus3.onap.org:10001/confluentinc/cp-kafka:7.4.9          kafka            Up 3 minutes
11:50:00 nexus3.onap.org:10001/grafana/grafana:latest               grafana          Up 3 minutes
11:50:00 nexus3.onap.org:10001/confluentinc/cp-zookeeper:latest     zookeeper        Up 3 minutes
11:50:00 nexus3.onap.org:10001/prom/prometheus:latest               prometheus       Up 3 minutes
11:50:00 nexus3.onap.org:10001/library/postgres:16.4                postgres         Up 3 minutes
11:50:00 Cloning into '/w/workspace/policy-opa-pdp-master-project-csit-policy-opa-pdp/csit/resources/tests/models'...
11:50:01 Building robot framework docker image
11:50:40 sha256:b2b7e59b5413e7ce75c7780716a86928d7eb5cf3ffb89eabdd76d35aceae0773
11:50:43 top - 11:50:43 up 6 min, 0 users, load average: 1.34, 1.22, 0.61
11:50:43 Tasks: 219 total, 1 running, 148 sleeping, 0 stopped, 0 zombie
11:50:43 %Cpu(s): 10.1 us, 2.4 sy, 0.0 ni, 84.6 id, 2.8 wa, 0.0 hi, 0.1 si, 0.1 st
11:50:43
11:50:43          total   used   free   shared   buff/cache   available
11:50:43 Mem:       31G   2.3G    21G      28M         7.3G         28G
11:50:43 Swap:     1.0G     0B   1.0G
11:50:43
11:50:43 IMAGE                                                      NAMES            STATUS
11:50:43 nexus3.onap.org:10001/onap/policy-opa-pdp:1.0.8-SNAPSHOT   policy-opa-pdp   Up 3 minutes
11:50:43 nexus3.onap.org:10001/onap/policy-pap:4.2.1-SNAPSHOT       policy-pap       Up 3 minutes
11:50:43 nexus3.onap.org:10001/onap/policy-api:4.2.1-SNAPSHOT       policy-api       Up 3 minutes
11:50:43 nexus3.onap.org:10001/confluentinc/cp-kafka:7.4.9          kafka            Up 3 minutes
11:50:43 nexus3.onap.org:10001/grafana/grafana:latest               grafana          Up 3 minutes
11:50:43 nexus3.onap.org:10001/confluentinc/cp-zookeeper:latest     zookeeper        Up 3 minutes
11:50:43 nexus3.onap.org:10001/prom/prometheus:latest               prometheus       Up 3 minutes
11:50:43 nexus3.onap.org:10001/library/postgres:16.4                postgres         Up 3 minutes
11:50:43
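[editor's note] The "Checking if REST port ... is open" steps above boil down to a plain TCP connect probe. A minimal sketch of the same check, assuming Python on the build host; the retry loop is an addition for illustration (the job itself sleeps a flat 3 minutes before probing).

    import socket
    import time

    def port_open(port: int, host: str = "localhost", timeout_s: float = 2.0) -> bool:
        # A successful TCP connect means a listener is bound to the port.
        try:
            with socket.create_connection((host, port), timeout=timeout_s):
                return True
        except OSError:
            return False

    for port in (30003, 30012):  # REST ports probed by this job
        for _ in range(36):      # up to ~3 minutes, matching the job's wait
            if port_open(port):
                break
            time.sleep(5)
        else:
            raise TimeoutError(f"port {port} never opened")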
11:50:46 CONTAINER ID   NAME             CPU %   MEM USAGE / LIMIT     MEM %   NET I/O           BLOCK I/O       PIDS
11:50:46 fa5a1d9d9cb2   policy-opa-pdp   0.16%   12.88MiB / 31.41GiB   0.04%   82.3kB / 79.4kB   0B / 0B         21
11:50:46 faf81eb6f54d   policy-pap       2.35%   478.9MiB / 31.41GiB   1.49%   2.21MB / 1.27MB   0B / 139MB      67
11:50:46 9f4cd0b37f12   policy-api       0.13%   423.8MiB / 31.41GiB   1.32%   1.15MB / 1.08MB   0B / 0B         60
11:50:46 d602d4f6f729   kafka            2.28%   388.3MiB / 31.41GiB   1.21%   310kB / 294kB     0B / 737kB      83
11:50:46 dca3c6f646bb   grafana          0.12%   115.2MiB / 31.41GiB   0.36%   19.4MB / 217kB    0B / 30.7MB     23
11:50:46 20d9f1509fc8   zookeeper        0.09%   84.9MiB / 31.41GiB    0.26%   57.8kB / 50.6kB   102kB / 410kB   62
11:50:46 80da9c5523d4   prometheus       0.09%   20.57MiB / 31.41GiB   0.06%   276kB / 11.9kB    0B / 0B         12
11:50:46 1f572868cb96   postgres         0.02%   85.64MiB / 31.41GiB   0.27%   2.55MB / 3.73MB   0B / 158MB      26
11:50:46
11:50:46 Container policy-csit Creating
11:50:46 Container policy-csit Created
11:50:46 Attaching to policy-csit
11:50:47 policy-csit | Invoking the robot tests from: opa-pdp-test.robot opa-pdp-slas.robot
11:50:47 policy-csit | Run Robot test
11:50:47 policy-csit | ROBOT_VARIABLES=-v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies
11:50:47 policy-csit | -v NODETEMPLATES:/opt/robotworkspace/models/models-examples/src/main/resources/nodetemplates
11:50:47 policy-csit | -v POLICY_API_IP:policy-api:6969
11:50:47 policy-csit | -v POLICY_RUNTIME_ACM_IP:policy-clamp-runtime-acm:6969
11:50:47 policy-csit | -v POLICY_PARTICIPANT_SIM_IP:policy-clamp-ac-sim-ppnt:6969
11:50:47 policy-csit | -v POLICY_PAP_IP:policy-pap:6969
11:50:47 policy-csit | -v APEX_IP:policy-apex-pdp:6969
11:50:47 policy-csit | -v APEX_EVENTS_IP:policy-apex-pdp:23324
11:50:47 policy-csit | -v KAFKA_IP:kafka:9092
11:50:47 policy-csit | -v PROMETHEUS_IP:prometheus:9090
11:50:47 policy-csit | -v POLICY_PDPX_IP:policy-xacml-pdp:6969
11:50:47 policy-csit | -v POLICY_OPA_IP:policy-opa-pdp:8282
11:50:47 policy-csit | -v POLICY_DROOLS_IP:policy-drools-pdp:9696
11:50:47 policy-csit | -v DROOLS_IP:policy-drools-apps:6969
11:50:47 policy-csit | -v DROOLS_IP_2:policy-drools-apps:9696
11:50:47 policy-csit | -v TEMP_FOLDER:/tmp/distribution
11:50:47 policy-csit | -v DISTRIBUTION_IP:policy-distribution:6969
11:50:47 policy-csit | -v TEST_ENV:docker
11:50:47 policy-csit | -v JAEGER_IP:jaeger:16686
11:50:47 policy-csit | Starting Robot test suites ...
11:50:47 policy-csit | ==============================================================================
11:50:47 policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas
11:50:47 policy-csit | ==============================================================================
11:50:47 policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas.Opa-Pdp-Test
11:50:47 policy-csit | ==============================================================================
11:50:47 policy-csit | Healthcheck :: Verify OPA PDP health check | PASS |
11:50:47 policy-csit | ------------------------------------------------------------------------------
11:50:47 policy-csit | ValidateDataBeforePolicyDeployment | PASS |
11:50:47 policy-csit | ------------------------------------------------------------------------------
11:51:13 policy-csit | ValidatesZonePolicy | PASS |
11:51:13 policy-csit | ------------------------------------------------------------------------------
11:51:39 policy-csit | ValidatesVehiclePolicy | PASS |
11:51:39 policy-csit | ------------------------------------------------------------------------------
11:52:05 policy-csit | ValidatesAbacPolicy | PASS |
11:52:05 policy-csit | ------------------------------------------------------------------------------
11:52:05 policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas.Opa-Pdp-Test | PASS |
11:52:05 policy-csit | 5 tests, 5 passed, 0 failed
11:52:05 policy-csit | ==============================================================================
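[editor's note] The Healthcheck case above is, in essence, an authenticated GET against the OPA PDP's REST port. A sketch under stated assumptions: the URL path, the mapping of host port 30012 to the container's port 8282, and the credentials are all hypothetical placeholders, not values taken from this log.

    import requests

    # Hypothetical endpoint and credentials; substitute your deployment's values.
    HEALTH_URL = "http://localhost:30012/policy/pdpo/v1/healthcheck"

    resp = requests.get(HEALTH_URL, auth=("user", "password"), timeout=10)
    resp.raise_for_status()
    print(resp.status_code, resp.json())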
11:52:05 policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas.Opa-Pdp-Slas
11:52:05 policy-csit | ==============================================================================
11:53:05 policy-csit | WaitForPrometheusServer :: Sleep time to wait for Prometheus serve... | PASS |
11:53:05 policy-csit | ------------------------------------------------------------------------------
11:53:05 policy-csit | ValidateOPAPolicyDecisionsTotalCounter :: Validate opa policy deci... | PASS |
11:53:05 policy-csit | ------------------------------------------------------------------------------
11:53:05 policy-csit | ValidateOPAPolicyDataTotalCounter :: Validate opa policy data coun... | PASS |
11:53:05 policy-csit | ------------------------------------------------------------------------------
11:53:05 policy-csit | ValidateOPADecisionAverageResponseTime :: Ensure average response ... | PASS |
11:53:05 policy-csit | ------------------------------------------------------------------------------
11:53:05 policy-csit | ValidateOPADataAverageResponseTime :: Ensure average response time... | PASS |
11:53:05 policy-csit | ------------------------------------------------------------------------------
11:53:05 policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas.Opa-Pdp-Slas | PASS |
11:53:05 policy-csit | 5 tests, 5 passed, 0 failed
11:53:05 policy-csit | ==============================================================================
11:53:05 policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas | PASS |
11:53:05 policy-csit | 10 tests, 10 passed, 0 failed
11:53:05 policy-csit | ==============================================================================
11:53:05 policy-csit | Output: /tmp/results/output.xml
11:53:05 policy-csit | Log: /tmp/results/log.html
11:53:05 policy-csit | Report: /tmp/results/report.html
11:53:05 policy-csit | RESULT: 0
11:53:05 policy-csit exited with code 0
11:53:05 IMAGE                                                      NAMES            STATUS
11:53:05 nexus3.onap.org:10001/onap/policy-opa-pdp:1.0.8-SNAPSHOT   policy-opa-pdp   Up 6 minutes
11:53:05 nexus3.onap.org:10001/onap/policy-pap:4.2.1-SNAPSHOT       policy-pap       Up 6 minutes
11:53:05 nexus3.onap.org:10001/onap/policy-api:4.2.1-SNAPSHOT       policy-api       Up 6 minutes
11:53:05 nexus3.onap.org:10001/confluentinc/cp-kafka:7.4.9          kafka            Up 6 minutes
11:53:05 nexus3.onap.org:10001/grafana/grafana:latest               grafana          Up 6 minutes
11:53:05 nexus3.onap.org:10001/confluentinc/cp-zookeeper:latest     zookeeper        Up 6 minutes
11:53:05 nexus3.onap.org:10001/prom/prometheus:latest               prometheus       Up 6 minutes
11:53:05 nexus3.onap.org:10001/library/postgres:16.4                postgres         Up 6 minutes
11:53:05 Shut down started!
11:53:07 Collecting logs from docker compose containers...
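[editor's note] The Opa-Pdp-Slas cases above validate OPA PDP counters and response-time metrics scraped by Prometheus. A minimal sketch of that kind of check against the Prometheus HTTP API published at http://localhost:30259; the metric name below is an assumption for illustration only.

    import requests

    PROM_QUERY_URL = "http://localhost:30259/api/v1/query"
    METRIC = "pdpo_policy_decisions_total"  # assumed metric name, for illustration

    r = requests.get(PROM_QUERY_URL, params={"query": METRIC}, timeout=10)
    r.raise_for_status()
    for sample in r.json().get("data", {}).get("result", []):
        # Instant-query results are (labels, [timestamp, value]) pairs.
        print(sample["metric"], sample["value"])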
11:53:07 grafana | logger=settings t=2025-06-17T11:46:56.9791251Z level=info msg="Starting Grafana" version=12.0.1+security-01 commit=ff20b06681749873999bb0a8e365f24fddaee33f branch=HEAD compiled=2025-06-17T11:46:56Z
11:53:07 grafana | logger=settings t=2025-06-17T11:46:56.979573824Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
11:53:07 grafana | logger=settings t=2025-06-17T11:46:56.979583574Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
11:53:07 grafana | logger=settings t=2025-06-17T11:46:56.979588924Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
11:53:07 grafana | logger=settings t=2025-06-17T11:46:56.979594794Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
11:53:07 grafana | logger=settings t=2025-06-17T11:46:56.979599204Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
11:53:07 grafana | logger=settings t=2025-06-17T11:46:56.979603754Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
11:53:07 grafana | logger=settings t=2025-06-17T11:46:56.979608294Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
11:53:07 grafana | logger=settings t=2025-06-17T11:46:56.979613014Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
11:53:07 grafana | logger=settings t=2025-06-17T11:46:56.979618994Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
11:53:07 grafana | logger=settings t=2025-06-17T11:46:56.979623434Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
11:53:07 grafana | logger=settings t=2025-06-17T11:46:56.979627864Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
11:53:07 grafana | logger=settings t=2025-06-17T11:46:56.979632414Z level=info msg=Target target=[all]
11:53:07 grafana | logger=settings t=2025-06-17T11:46:56.979642374Z level=info msg="Path Home" path=/usr/share/grafana
11:53:07 grafana | logger=settings t=2025-06-17T11:46:56.979647734Z level=info msg="Path Data" path=/var/lib/grafana
11:53:07 grafana | logger=settings t=2025-06-17T11:46:56.979652104Z level=info msg="Path Logs" path=/var/log/grafana
11:53:07 grafana | logger=settings t=2025-06-17T11:46:56.979656214Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
11:53:07 grafana | logger=settings t=2025-06-17T11:46:56.979660624Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
11:53:07 grafana | logger=settings t=2025-06-17T11:46:56.979664634Z level=info msg="App mode production"
11:53:07 grafana | logger=featuremgmt t=2025-06-17T11:46:56.980236289Z level=info msg=FeatureToggles addFieldFromCalculationStatFunctions=true alertingApiServer=true cloudWatchCrossAccountQuerying=true dataplaneFrontendFallback=true annotationPermissionUpdate=true newDashboardSharingComponent=true promQLScope=true unifiedRequestLog=true lokiQueryHints=true alertingUIOptimizeReducer=true groupToNestedTableTransformation=true alertingRuleRecoverDeleted=true pluginsDetailsRightPanel=true onPremToCloudMigrations=true alertingRuleVersionHistoryRestore=true logsInfiniteScrolling=true correlations=true dashboardScene=true azureMonitorPrometheusExemplars=true newPDFRendering=true azureMonitorEnableUserAuth=true panelMonitoring=true alertRuleRestore=true influxdbBackendMigration=true tlsMemcached=true dashgpt=true lokiStructuredMetadata=true pinNavItems=true transformationsRedesign=true useSessionStorageForRedirection=true recoveryThreshold=true publicDashboardsScene=true kubernetesClientDashboardsFolders=true failWrongDSUID=true logsPanelControls=true grafanaconThemes=true nestedFolders=true prometheusUsesCombobox=true cloudWatchNewLabelParsing=true preinstallAutoUpdate=true lokiLabelNamesQueryApi=true logRowsPopoverMenu=true logsContextDatasourceUi=true unifiedStorageSearchPermissionFiltering=true externalCorePlugins=true alertingInsights=true alertingSimplifiedRouting=true awsAsyncQueryCaching=true dashboardSceneSolo=true alertingNotificationsStepMode=true formatString=true cloudWatchRoundUpEndTime=true recordedQueriesMulti=true lokiQuerySplitting=true reportingUseRawTimeRange=true logsExploreTableVisualisation=true dashboardSceneForViewers=true ssoSettingsSAML=true kubernetesPlaylists=true prometheusAzureOverrideAudience=true alertingQueryAndExpressionsStepMode=true angularDeprecationUI=true ssoSettingsApi=true alertingRulePermanentlyDelete=true newFiltersUI=true
11:53:07 grafana | logger=sqlstore t=2025-06-17T11:46:56.98030696Z level=info msg="Connecting to DB" dbtype=sqlite3
11:53:07 grafana | logger=sqlstore t=2025-06-17T11:46:56.98032507Z level=info msg="Creating SQLite database file" path=/var/lib/grafana/grafana.db
11:53:07 grafana | logger=migrator t=2025-06-17T11:46:56.98259567Z level=info msg="Locking database"
11:53:07 grafana | logger=migrator t=2025-06-17T11:46:56.98260972Z level=info msg="Starting DB migrations"
11:53:07 grafana | logger=migrator t=2025-06-17T11:46:56.983531368Z level=info msg="Executing migration" id="create migration_log table"
11:53:07 grafana | logger=migrator t=2025-06-17T11:46:56.984641819Z level=info msg="Migration successfully executed" id="create migration_log table" duration=1.109891ms
11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.0074701Z level=info msg="Executing migration" id="create user table"
11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.008492689Z level=info msg="Migration successfully executed" id="create user table" duration=1.016128ms
11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.012743536Z level=info msg="Executing migration" id="add unique index user.login"
11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.013493042Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=751.886µs
11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.018689837Z level=info msg="Executing migration" id="add unique index user.email"
11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.019206081Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=518.164µs
11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.022088196Z level=info msg="Executing migration" id="drop index UQE_user_login - v1"
11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.022735672Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=647.056µs
11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.025735597Z level=info msg="Executing migration" id="drop index UQE_user_email - v1"
11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.026410504Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=674.047µs
11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.031355446Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1"
11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.033696256Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=2.33938ms
11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.036714832Z level=info msg="Executing migration" id="create user table v2"
11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.03751593Z level=info msg="Migration successfully executed" id="create user table v2" duration=796.478µs
11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.040807178Z level=info msg="Executing migration" id="create index UQE_user_login - v2"
11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.041568915Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=761.156µs
11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.046398907Z level=info msg="Executing migration" id="create index UQE_user_email - v2"
11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.047131653Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=732.166µs
11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.051052586Z level=info msg="Executing migration" id="copy data_source v1 to v2"
11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.05140641Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=353.154µs
11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.054237075Z level=info msg="Executing migration" id="Drop old table user_v1"
11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.054730879Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=492.154µs
11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.058499291Z level=info msg="Executing migration" id="Add column help_flags1 to user table"
11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.059561961Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=1.06204ms
11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.066132657Z level=info msg="Executing migration" id="Update user table charset"
11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.066162287Z level=info msg="Migration successfully executed" id="Update user table charset" duration=30.71µs
11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.073053136Z level=info msg="Executing migration" id="Add last_seen_at column to user"
11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.074206947Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=1.154141ms
11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.077861698Z level=info msg="Executing migration" id="Add missing user data"
11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.07801091Z level=info msg="Migration successfully executed" id="Add missing user data" duration=149.152µs
11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.083639488Z level=info msg="Executing migration" id="Add is_disabled column to user"
11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.084604356Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=965.168µs
11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.090249795Z level=info msg="Executing migration" id="Add index user.login/user.email"
11:53:07 grafana | logger=migrator
t=2025-06-17T11:46:57.091073773Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=824.208µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.104799801Z level=info msg="Executing migration" id="Add is_service_account column to user" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.106228673Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=1.428702ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.110346919Z level=info msg="Executing migration" id="Update is_service_account column to nullable" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.118273197Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=7.925568ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.128028822Z level=info msg="Executing migration" id="Add uid column to user" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.129140291Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=1.111659ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.133994803Z level=info msg="Executing migration" id="Update uid column values for users" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.134207155Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=212.322µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.139234929Z level=info msg="Executing migration" id="Add unique index user_uid" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.140420188Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=1.185039ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.144781736Z level=info msg="Executing migration" id="Add is_provisioned column to user" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.146785944Z level=info msg="Migration successfully executed" id="Add is_provisioned column to user" duration=2.003838ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.152172401Z level=info msg="Executing migration" id="update login field with orgid to allow for multiple service accounts with same name across orgs" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.152633994Z level=info msg="Migration successfully executed" id="update login field with orgid to allow for multiple service accounts with same name across orgs" duration=461.283µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.157482516Z level=info msg="Executing migration" id="update service accounts login field orgid to appear only once" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.158211203Z level=info msg="Migration successfully executed" id="update service accounts login field orgid to appear only once" duration=728.427µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.161518991Z level=info msg="Executing migration" id="update login and email fields to lowercase" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.162046485Z level=info msg="Migration successfully executed" id="update login and email fields to lowercase" duration=526.844µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.164823599Z level=info msg="Executing migration" id="update login and email fields to lowercase2" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.165164492Z level=info msg="Migration successfully executed" id="update login and email fields to lowercase2" duration=340.863µs 11:53:07 
grafana | logger=migrator t=2025-06-17T11:46:57.169230548Z level=info msg="Executing migration" id="create temp user table v1-7" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.170078115Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=846.797µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.173318303Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.174430743Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=1.11186ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.178314686Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.179981601Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=1.666785ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.185327956Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.186536167Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=1.207381ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.190992085Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.191680672Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=688.207µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.195416514Z level=info msg="Executing migration" id="Update temp_user table charset" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.195441914Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=21.94µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.199341317Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.200345086Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=1.002569ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.215399696Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.216359574Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=959.258µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.220116257Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.220766042Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=648.725µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.226311491Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.226950526Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=638.545µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.230483877Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.233484083Z level=info msg="Migration successfully executed" id="Rename table temp_user to 
temp_user_tmp_qwerty - v1" duration=2.999556ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.237959351Z level=info msg="Executing migration" id="create temp_user v2" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.238776089Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=816.688µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.243069305Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.244020043Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=950.098µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.248998106Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.249706812Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=708.496µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.25519742Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.256231159Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=1.035469ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.260790118Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.261623326Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=832.578µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.267661388Z level=info msg="Executing migration" id="copy temp_user v1 to v2" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.268011711Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=348.913µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.272160356Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.27257116Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=411.254µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.276206371Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.276531774Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=325.013µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.281125614Z level=info msg="Executing migration" id="create star table" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.28178161Z level=info msg="Migration successfully executed" id="create star table" duration=655.686µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.286127846Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.287770191Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=1.642005ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.291881736Z level=info msg="Executing migration" id="Add column dashboard_uid in star" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.293027317Z level=info msg="Migration successfully executed" id="Add column dashboard_uid in star" 
duration=1.146731ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.296086793Z level=info msg="Executing migration" id="Add column org_id in star" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.297695947Z level=info msg="Migration successfully executed" id="Add column org_id in star" duration=1.608474ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.302969992Z level=info msg="Executing migration" id="Add column updated in star" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.305915738Z level=info msg="Migration successfully executed" id="Add column updated in star" duration=2.944706ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.308991775Z level=info msg="Executing migration" id="add index in star table on dashboard_uid, org_id and user_id columns" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.310148354Z level=info msg="Migration successfully executed" id="add index in star table on dashboard_uid, org_id and user_id columns" duration=1.155799ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.328445183Z level=info msg="Executing migration" id="create org table v1" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.330132227Z level=info msg="Migration successfully executed" id="create org table v1" duration=1.683974ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.334773687Z level=info msg="Executing migration" id="create index UQE_org_name - v1" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.336560542Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=1.789335ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.343034859Z level=info msg="Executing migration" id="create org_user table v1" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.344238639Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=1.152749ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.347636588Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.348353415Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=716.667µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.351537372Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.352691962Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=1.15746ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.356614016Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.357734015Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=1.118769ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.364584404Z level=info msg="Executing migration" id="Update org table charset" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.364611644Z level=info msg="Migration successfully executed" id="Update org table charset" duration=28.06µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.366697532Z level=info msg="Executing migration" id="Update org_user table charset" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.366732043Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=35.651µs 11:53:07 grafana 
| logger=migrator t=2025-06-17T11:46:57.370587497Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.370890119Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=301.902µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.374773322Z level=info msg="Executing migration" id="create dashboard table" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.375960512Z level=info msg="Migration successfully executed" id="create dashboard table" duration=1.18694ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.381952325Z level=info msg="Executing migration" id="add index dashboard.account_id" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.383606159Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=1.652684ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.388866674Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.38962394Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=756.916µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.393255823Z level=info msg="Executing migration" id="create dashboard_tag table" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.393917298Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=661.115µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.399618247Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.400824627Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=1.20569ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.403568511Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.404241807Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=673.266µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.407684707Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.41268795Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=5.002863ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.418736642Z level=info msg="Executing migration" id="create dashboard v2" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.420334276Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=1.590004ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.454901155Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.455745252Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=844.827µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.459831407Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.460674615Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" 
duration=842.998µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.464719619Z level=info msg="Executing migration" id="copy dashboard v1 to v2" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.465096272Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=376.093µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.469803693Z level=info msg="Executing migration" id="drop table dashboard_v1" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.470695891Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=891.438µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.473710527Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.473735977Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=26.97µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.479364975Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.48226393Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=2.899745ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.487748058Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.489579214Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=1.830386ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.493389307Z level=info msg="Executing migration" id="Add column gnetId in dashboard" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.495256653Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=1.866696ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.50295369Z level=info msg="Executing migration" id="Add index for gnetId in dashboard" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.503760516Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=808.116µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.509762738Z level=info msg="Executing migration" id="Add column plugin_id in dashboard" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.511723565Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=1.960157ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.518250841Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.51918529Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=934.229µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.523231215Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.524016281Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=784.666µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.668346558Z level=info msg="Executing migration" id="Update dashboard table charset" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.668397678Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=58.79µs 11:53:07 grafana | 
logger=migrator t=2025-06-17T11:46:57.739123219Z level=info msg="Executing migration" id="Update dashboard_tag table charset" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.739171669Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=50.97µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.793162325Z level=info msg="Executing migration" id="Add column folder_id in dashboard" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.796557156Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=3.392441ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.800884032Z level=info msg="Executing migration" id="Add column isFolder in dashboard" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.80290422Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=2.019488ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.807033705Z level=info msg="Executing migration" id="Add column has_acl in dashboard" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.808993072Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=1.958377ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.811782876Z level=info msg="Executing migration" id="Add column uid in dashboard" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.813762614Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=1.943358ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.818237892Z level=info msg="Executing migration" id="Update uid column values in dashboard" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.818451514Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=211.762µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.824499617Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.825390124Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=892.097µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.829365198Z level=info msg="Executing migration" id="Remove unique index org_id_slug" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.829958534Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=593.086µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.832517785Z level=info msg="Executing migration" id="Update dashboard title length" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.832536325Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=19.08µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.836659061Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.837222566Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=563.365µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.839497946Z level=info msg="Executing migration" id="create dashboard_provisioning" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.83998564Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=487.584µs 11:53:07 grafana | logger=migrator 
t=2025-06-17T11:46:57.842685063Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.84689297Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=4.207467ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.854693947Z level=info msg="Executing migration" id="create dashboard_provisioning v2" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.855329293Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=637.216µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.869997469Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.87128179Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=1.283921ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.874731601Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.875552858Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=821.177µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.880179217Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.88048604Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=306.303µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.883572787Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.884358263Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=784.926µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.887243298Z level=info msg="Executing migration" id="Add check_sum column" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.890335815Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=3.092617ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.897423816Z level=info msg="Executing migration" id="Add index for dashboard_title" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.899129401Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=1.710305ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.904001923Z level=info msg="Executing migration" id="delete tags for deleted dashboards" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.904299546Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=298.313µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.910030135Z level=info msg="Executing migration" id="delete stars for deleted dashboards" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.910218877Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=188.592µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.913495145Z level=info msg="Executing migration" id="Add index for dashboard_is_folder" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.914400363Z 
level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=905.288µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.920309284Z level=info msg="Executing migration" id="Add isPublic for dashboard" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.924350298Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=4.041844ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.932449208Z level=info msg="Executing migration" id="Add deleted for dashboard" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.937337471Z level=info msg="Migration successfully executed" id="Add deleted for dashboard" duration=4.887513ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.941129903Z level=info msg="Executing migration" id="Add index for deleted" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.942000262Z level=info msg="Migration successfully executed" id="Add index for deleted" duration=873.409µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.945089478Z level=info msg="Executing migration" id="Add column dashboard_uid in dashboard_tag" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.946800372Z level=info msg="Migration successfully executed" id="Add column dashboard_uid in dashboard_tag" duration=1.707484ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.950665706Z level=info msg="Executing migration" id="Add column org_id in dashboard_tag" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.95231872Z level=info msg="Migration successfully executed" id="Add column org_id in dashboard_tag" duration=1.652384ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.95693877Z level=info msg="Executing migration" id="Add missing dashboard_uid and org_id to dashboard_tag" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.957245503Z level=info msg="Migration successfully executed" id="Add missing dashboard_uid and org_id to dashboard_tag" duration=304.243µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.960928664Z level=info msg="Executing migration" id="Add apiVersion for dashboard" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.962562889Z level=info msg="Migration successfully executed" id="Add apiVersion for dashboard" duration=1.633425ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.975695063Z level=info msg="Executing migration" id="Add index for dashboard_uid on dashboard_tag table" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.976292528Z level=info msg="Migration successfully executed" id="Add index for dashboard_uid on dashboard_tag table" duration=597.054µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.984235726Z level=info msg="Executing migration" id="Add missing dashboard_uid and org_id to star" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.98473552Z level=info msg="Migration successfully executed" id="Add missing dashboard_uid and org_id to star" duration=502.674µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.990274918Z level=info msg="Executing migration" id="create data_source table" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.990995994Z level=info msg="Migration successfully executed" id="create data_source table" duration=721.126µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.993633147Z level=info msg="Executing migration" id="add index data_source.account_id" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:57.994275042Z level=info 
msg="Migration successfully executed" id="add index data_source.account_id" duration=641.735µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.000278514Z level=info msg="Executing migration" id="add unique index data_source.account_id_name" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.00083263Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=556.556µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.004156559Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.00528032Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=1.123551ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.008641511Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.009455119Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=814.258µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.015222183Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.022094306Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=6.871483ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.025489438Z level=info msg="Executing migration" id="create data_source table v2" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.026395147Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=905.699µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.029832679Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.030600905Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=767.736µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.034955196Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.035754074Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=796.037µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.039001374Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.039497828Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=495.984µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.045032169Z level=info msg="Executing migration" id="Add column with_credentials" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.048893836Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=3.857827ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.052278147Z level=info msg="Executing migration" id="Add secure json data column" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.054971072Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=2.693085ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.057883269Z level=info msg="Executing migration" id="Update data_source table charset" 11:53:07 grafana | 
logger=migrator t=2025-06-17T11:46:58.05790813Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=25.571µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.062712695Z level=info msg="Executing migration" id="Update initial version to 1" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.062936057Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=223.142µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.066118416Z level=info msg="Executing migration" id="Add read_only data column" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.068491968Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=2.373312ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.085471406Z level=info msg="Executing migration" id="Migrate logging ds to loki ds" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.085828659Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=356.733µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.089693185Z level=info msg="Executing migration" id="Update json_data with nulls" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.089994597Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=301.432µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.09566242Z level=info msg="Executing migration" id="Add uid column" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.098100153Z level=info msg="Migration successfully executed" id="Add uid column" duration=2.436583ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.102405163Z level=info msg="Executing migration" id="Update uid value" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.102580985Z level=info msg="Migration successfully executed" id="Update uid value" duration=173.532µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.105998197Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.106994735Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=996.008µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.111923702Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.112737159Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=813.497µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.115760568Z level=info msg="Executing migration" id="Add is_prunable column" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.11824642Z level=info msg="Migration successfully executed" id="Add is_prunable column" duration=2.485142ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.121239199Z level=info msg="Executing migration" id="Add api_version column" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.124749141Z level=info msg="Migration successfully executed" id="Add api_version column" duration=3.509232ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.131494164Z level=info msg="Executing migration" id="Update secure_json_data column to MediumText" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.131549484Z level=info msg="Migration successfully executed" id="Update secure_json_data column to MediumText" duration=86.861µs 
11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.137048885Z level=info msg="Executing migration" id="create api_key table" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.138196476Z level=info msg="Migration successfully executed" id="create api_key table" duration=1.147371ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.141314716Z level=info msg="Executing migration" id="add index api_key.account_id" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.141884631Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=569.655µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.145186921Z level=info msg="Executing migration" id="add index api_key.key" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.145728636Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=541.565µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.149878035Z level=info msg="Executing migration" id="add index api_key.account_id_name" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.151089906Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=1.211281ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.154027993Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.155142774Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=1.114081ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.158145783Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.159738427Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=1.591794ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.164739034Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.16550047Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=762.826µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.168071504Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.172813789Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=4.741715ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.178089929Z level=info msg="Executing migration" id="create api_key table v2" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.178768875Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=678.666µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.181078416Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.181848513Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=769.627µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.184574559Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.185536167Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=961.479µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.204503344Z 
level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.206344972Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=1.839588ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.209920175Z level=info msg="Executing migration" id="copy api_key v1 to v2" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.210268178Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=348.773µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.214558169Z level=info msg="Executing migration" id="Drop old table api_key_v1" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.215081153Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=522.154µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.220180711Z level=info msg="Executing migration" id="Update api_key table charset" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.220200141Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=19.64µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.224292859Z level=info msg="Executing migration" id="Add expires to api_key table" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.226891183Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=2.597704ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.231207564Z level=info msg="Executing migration" id="Add service account foreign key" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.234799837Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=3.592073ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.247730938Z level=info msg="Executing migration" id="set service account foreign key to nil if 0" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.248074121Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=342.613µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.251426772Z level=info msg="Executing migration" id="Add last_used_at to api_key table" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.255360498Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=3.934166ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.258357166Z level=info msg="Executing migration" id="Add is_revoked column to api_key table" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.261070722Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=2.713256ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.264751657Z level=info msg="Executing migration" id="create dashboard_snapshot table v4" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.265478703Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=726.846µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.268442511Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.269014686Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=571.905µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.271325968Z level=info msg="Executing migration" id="create 
dashboard_snapshot table v5 #2" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.272137675Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=811.458µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.276346295Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.277419954Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=1.073009ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.280578154Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.281421792Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=845.668µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.285385009Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.286220296Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=834.847µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.290314515Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.290330415Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=16.68µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.293119961Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.293143051Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=23.46µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.296063879Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.298811994Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=2.747585ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.311372391Z level=info msg="Executing migration" id="Add encrypted dashboard json column" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.316110795Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=4.738174ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.323121851Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.323139081Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=17.93µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.325874447Z level=info msg="Executing migration" id="create quota table v1" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.326600953Z level=info msg="Migration successfully executed" id="create quota table v1" duration=726.376µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.330143896Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.331409029Z level=info msg="Migration successfully executed" id="create index 
UQE_quota_org_id_user_id_target - v1" duration=1.264963ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.335646927Z level=info msg="Executing migration" id="Update quota table charset" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.335682318Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=36.791µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.33915806Z level=info msg="Executing migration" id="create plugin_setting table" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.340779655Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=1.620185ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.344332208Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.345201526Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=866.758µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.349114203Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.352342093Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=3.2271ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.355709914Z level=info msg="Executing migration" id="Update plugin_setting table charset" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.355734505Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=24.951µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.359915064Z level=info msg="Executing migration" id="update NULL org_id to 1" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.360238117Z level=info msg="Migration successfully executed" id="update NULL org_id to 1" duration=322.553µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.36489831Z level=info msg="Executing migration" id="make org_id NOT NULL and DEFAULT VALUE 1" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.378305465Z level=info msg="Migration successfully executed" id="make org_id NOT NULL and DEFAULT VALUE 1" duration=13.406335ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.39590611Z level=info msg="Executing migration" id="create session table" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.397504594Z level=info msg="Migration successfully executed" id="create session table" duration=1.598094ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.402707213Z level=info msg="Executing migration" id="Drop old table playlist table" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.402837084Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=127.751µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.421201546Z level=info msg="Executing migration" id="Drop old table playlist_item table" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.421321777Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=121.101µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.428369632Z level=info msg="Executing migration" id="create playlist table v2" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.429754635Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=1.384843ms 
11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.433909464Z level=info msg="Executing migration" id="create playlist item table v2" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.43457824Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=668.396µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.439393375Z level=info msg="Executing migration" id="Update playlist table charset" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.439415885Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=23.11µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.443668865Z level=info msg="Executing migration" id="Update playlist_item table charset" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.443691745Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=23.64µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.446767604Z level=info msg="Executing migration" id="Add playlist column created_at" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.449876453Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=3.108159ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.453524337Z level=info msg="Executing migration" id="Add playlist column updated_at" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.45922464Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=5.736674ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.463009486Z level=info msg="Executing migration" id="drop preferences table v2" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.463189097Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=178.732µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.46671537Z level=info msg="Executing migration" id="drop preferences table v3" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.466923102Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=206.531µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.470617656Z level=info msg="Executing migration" id="create preferences table v3" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.47211093Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=1.492544ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.477257318Z level=info msg="Executing migration" id="Update preferences table charset" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.477372919Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=114.701µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.481508607Z level=info msg="Executing migration" id="Add column team_id in preferences" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.486619256Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=5.111389ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.490296939Z level=info msg="Executing migration" id="Update team_id column values in preferences" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.490530972Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=233.793µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.494700891Z level=info msg="Executing 
migration" id="Add column week_start in preferences" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.497963791Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=3.26227ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.501262762Z level=info msg="Executing migration" id="Add column preferences.json_data" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.504584133Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=3.307871ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.508766152Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.508788322Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=22.81µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.513111492Z level=info msg="Executing migration" id="Add preferences index org_id" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.514015121Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=903.509µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.555866092Z level=info msg="Executing migration" id="Add preferences index user_id" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.556975791Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=1.111259ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.566046196Z level=info msg="Executing migration" id="create alert table v1" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.568029244Z level=info msg="Migration successfully executed" id="create alert table v1" duration=1.978938ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.574530795Z level=info msg="Executing migration" id="add index alert org_id & id " 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.575561455Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=1.03343ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.65443518Z level=info msg="Executing migration" id="add index alert state" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.655971814Z level=info msg="Migration successfully executed" id="add index alert state" duration=1.538444ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.663565445Z level=info msg="Executing migration" id="add index alert dashboard_id" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.664919798Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=1.354273ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.670520721Z level=info msg="Executing migration" id="Create alert_rule_tag table v1" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.671480169Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=958.949µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.675939821Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.676767709Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=827.588µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.680308001Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" 11:53:07 
grafana | logger=migrator t=2025-06-17T11:46:58.681067518Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=759.237µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.686146496Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.698969666Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=12.82078ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.706588726Z level=info msg="Executing migration" id="Create alert_rule_tag table v2" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.707483135Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=894.719µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.711520213Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.71229325Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=772.638µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.722508795Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.72311997Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=613.025µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.726700685Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.727383531Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=684.476µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.730458379Z level=info msg="Executing migration" id="create alert_notification table v1" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.731239286Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=780.667µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.735603267Z level=info msg="Executing migration" id="Add column is_default" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.739505423Z level=info msg="Migration successfully executed" id="Add column is_default" duration=3.901396ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.744259578Z level=info msg="Executing migration" id="Add column frequency" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.748625298Z level=info msg="Migration successfully executed" id="Add column frequency" duration=4.36562ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.772851854Z level=info msg="Executing migration" id="Add column send_reminder" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.7766753Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=3.828986ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.781208002Z level=info msg="Executing migration" id="Add column disable_resolve_message" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.783799217Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=2.590925ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.786916045Z level=info msg="Executing 
migration" id="add index alert_notification org_id & name" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.787716343Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=799.688µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.796832049Z level=info msg="Executing migration" id="Update alert table charset" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.796873649Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=43.18µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.802762084Z level=info msg="Executing migration" id="Update alert_notification table charset" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.802800794Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=40.03µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.812458994Z level=info msg="Executing migration" id="create notification_journal table v1" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.814322711Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=1.864977ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.821940023Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.824125262Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=2.185969ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.828664805Z level=info msg="Executing migration" id="drop alert_notification_journal" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.829774365Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=1.10926ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.83452158Z level=info msg="Executing migration" id="create alert_notification_state table v1" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.835085825Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=564.135µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.839576037Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.840259613Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=683.006µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.844814056Z level=info msg="Executing migration" id="Add for to alert table" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.848846783Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=4.029097ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.852280705Z level=info msg="Executing migration" id="Add column uid in alert_notification" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.855736158Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=3.452233ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.86892606Z level=info msg="Executing migration" id="Update uid column values in alert_notification" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.869360045Z level=info msg="Migration successfully executed" id="Update uid column values in 
alert_notification" duration=431.985µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.874880056Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.876511181Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=1.629705ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.882698539Z level=info msg="Executing migration" id="Remove unique index org_id_name" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.883634698Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=936.889µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.888298061Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.892141607Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=3.842886ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.900066721Z level=info msg="Executing migration" id="alter alert.settings to mediumtext" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.900118162Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=55.271µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.904337621Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.905040417Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=702.796µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.907229668Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.907839413Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=609.315µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.912183064Z level=info msg="Executing migration" id="Drop old annotation table v4" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.912273905Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=91.051µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.916238232Z level=info msg="Executing migration" id="create annotation table v5" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.917146221Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=907.089µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.920578202Z level=info msg="Executing migration" id="add index annotation 0 v3" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.92144007Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=860.748µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.924960633Z level=info msg="Executing migration" id="add index annotation 1 v3" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.926015303Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=1.054ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.930724046Z level=info msg="Executing migration" id="add index annotation 2 v3" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.932223831Z level=info msg="Migration successfully executed" id="add index 
annotation 2 v3" duration=1.503785ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.936440181Z level=info msg="Executing migration" id="add index annotation 3 v3" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.938030025Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=1.590404ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.942446886Z level=info msg="Executing migration" id="add index annotation 4 v3" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.943434035Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=986.919µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.947889617Z level=info msg="Executing migration" id="Update annotation table charset" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.947913687Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=24.59µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.951879384Z level=info msg="Executing migration" id="Add column region_id to annotation table" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.957070603Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=5.190029ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.96114442Z level=info msg="Executing migration" id="Drop category_id index" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.961945208Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=800.478µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:58.994803134Z level=info msg="Executing migration" id="Add column tags to annotation table" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.001109924Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=6.307379ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.004240552Z level=info msg="Executing migration" id="Create annotation_tag table v2" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.004907358Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=664.576µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.008209977Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.009074305Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=863.888µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.013729187Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.014483253Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=753.856µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.01744582Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.029666239Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=12.219349ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.033027779Z level=info msg="Executing migration" id="Create annotation_tag table v3" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.033822596Z level=info msg="Migration successfully 
executed" id="Create annotation_tag table v3" duration=794.477µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.037832713Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.038452228Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=621.825µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.04322604Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.043653364Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=427.364µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.046830252Z level=info msg="Executing migration" id="drop table annotation_tag_v2" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.047618899Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=787.947µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.053088818Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.053269279Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=180.281µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.057392466Z level=info msg="Executing migration" id="Add created time to annotation table" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.062800345Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=5.407089ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.137773482Z level=info msg="Executing migration" id="Add updated time to annotation table" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.141600427Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=3.825355ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.148119364Z level=info msg="Executing migration" id="Add index for created in annotation table" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.149039692Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=920.348µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.153328141Z level=info msg="Executing migration" id="Add index for updated in annotation table" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.154534851Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=1.20639ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.159240564Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.159466626Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=226.172µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.163609823Z level=info msg="Executing migration" id="Add epoch_end column" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.16778009Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=4.169817ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.174141656Z 
level=info msg="Executing migration" id="Add index for epoch_end" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.175440898Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=1.298252ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.21492554Z level=info msg="Executing migration" id="Make epoch_end the same as epoch" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.215316283Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=389.843µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.219063436Z level=info msg="Executing migration" id="Move region to single row" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.219727303Z level=info msg="Migration successfully executed" id="Move region to single row" duration=660.617µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.224269503Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.225106681Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=836.168µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.247947694Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.249585998Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=1.638044ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.253938387Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.255201528Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=1.261071ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.259723319Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.260557446Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=833.907µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.264497672Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.265326459Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=828.247µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.270193132Z level=info msg="Executing migration" id="Add index for alert_id on annotation table" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.27105261Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=863.808µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.274055016Z level=info msg="Executing migration" id="Increase tags column to length 4096" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.274107017Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=52.101µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.277290595Z level=info msg="Executing migration" id="Increase prev_state column to length 40 not null" 11:53:07 grafana | 
logger=migrator t=2025-06-17T11:46:59.277322925Z level=info msg="Migration successfully executed" id="Increase prev_state column to length 40 not null" duration=33.01µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.281670504Z level=info msg="Executing migration" id="Increase new_state column to length 40 not null" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.281685035Z level=info msg="Migration successfully executed" id="Increase new_state column to length 40 not null" duration=15.161µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.28571516Z level=info msg="Executing migration" id="create test_data table" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.286497867Z level=info msg="Migration successfully executed" id="create test_data table" duration=781.987µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.289850718Z level=info msg="Executing migration" id="create dashboard_version table v1" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.290629064Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=778.347µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.293972454Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.294802091Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=829.408µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.299497223Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.300641614Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=1.143751ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.305343095Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.305824459Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=484.014µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.309817335Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.310567421Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=749.846µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.315896189Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.315921649Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=27.22µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.320117216Z level=info msg="Executing migration" id="Add apiVersion for dashboard_version" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.324593377Z level=info msg="Migration successfully executed" id="Add apiVersion for dashboard_version" duration=4.475641ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.457715593Z level=info msg="Executing migration" id="create team table" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.459863731Z level=info msg="Migration successfully executed" id="create team table" duration=2.143598ms 11:53:07 grafana | logger=migrator 
t=2025-06-17T11:46:59.605255597Z level=info msg="Executing migration" id="add index team.org_id" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.607325705Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=2.072748ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.663808028Z level=info msg="Executing migration" id="add unique index team_org_id_name" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.664935589Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=1.129481ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.739024148Z level=info msg="Executing migration" id="Add column uid in team" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.747873087Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=8.849769ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.754056652Z level=info msg="Executing migration" id="Update uid column values in team" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.754213853Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=157.711µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.758097889Z level=info msg="Executing migration" id="Add unique index team_org_id_uid" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.759087947Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=989.878µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.764028681Z level=info msg="Executing migration" id="Add column external_uid in team" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.768576532Z level=info msg="Migration successfully executed" id="Add column external_uid in team" duration=4.547591ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.77167739Z level=info msg="Executing migration" id="Add column is_provisioned in team" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.776331761Z level=info msg="Migration successfully executed" id="Add column is_provisioned in team" duration=4.653731ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.780293686Z level=info msg="Executing migration" id="create team member table" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.781133483Z level=info msg="Migration successfully executed" id="create team member table" duration=842.777µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.784849166Z level=info msg="Executing migration" id="add index team_member.org_id" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.785902086Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=1.0527ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.790836411Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.791809499Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=969.908µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.795835854Z level=info msg="Executing migration" id="add index team_member.team_id" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.796702363Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=866.209µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.802261192Z level=info msg="Executing migration" id="Add column email to 
team table" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.806923723Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=4.662281ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.81001294Z level=info msg="Executing migration" id="Add column external to team_member table" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.814816304Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=4.802714ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.818782069Z level=info msg="Executing migration" id="Add column permission to team_member table" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.822468302Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=3.688403ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.840795995Z level=info msg="Executing migration" id="add unique index team_member_user_id_org_id" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.841673703Z level=info msg="Migration successfully executed" id="add unique index team_member_user_id_org_id" duration=879.708µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.844891692Z level=info msg="Executing migration" id="create dashboard acl table" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.845492627Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=600.595µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.850058718Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.851207958Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=1.14829ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.857625445Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.858505213Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=879.608µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.862836461Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.8637397Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=911.199µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.868724134Z level=info msg="Executing migration" id="add index dashboard_acl_user_id" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.869593282Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=868.968µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.873283335Z level=info msg="Executing migration" id="add index dashboard_acl_team_id" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.874166012Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=882.788µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.877562522Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.87831184Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=749.818µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.88293652Z 
level=info msg="Executing migration" id="add index dashboard_permission" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.884178632Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=1.243922ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.888999485Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.889506769Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=507.224µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.899093975Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.899334117Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=240.532µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.902234422Z level=info msg="Executing migration" id="create tag table" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.90298784Z level=info msg="Migration successfully executed" id="create tag table" duration=746.107µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.906211348Z level=info msg="Executing migration" id="add index tag.key_value" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.906897174Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=685.486µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.910834839Z level=info msg="Executing migration" id="create login attempt table" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.911380034Z level=info msg="Migration successfully executed" id="create login attempt table" duration=545.115µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.917163916Z level=info msg="Executing migration" id="add index login_attempt.username" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.918129854Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=967.678µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.924589431Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.92552293Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=933.729µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.932367321Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.9479023Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=15.532569ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.952398559Z level=info msg="Executing migration" id="create login_attempt v2" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.953221876Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=824.047µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.960584862Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.961416589Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=832.467µs 11:53:07 grafana | logger=migrator 
t=2025-06-17T11:46:59.9648061Z level=info msg="Executing migration" id="copy login_attempt v1 to v2" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.965004362Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=198.012µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.967308923Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.967745466Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=436.123µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.971737862Z level=info msg="Executing migration" id="create user auth table" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.972450608Z level=info msg="Migration successfully executed" id="create user auth table" duration=712.716µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.978023817Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.979574221Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=1.548864ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.983204514Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.983231384Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=27.82µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.988106108Z level=info msg="Executing migration" id="Add OAuth access token to user_auth" 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.993264113Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=5.157465ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:46:59.996432312Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.001475537Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=5.043115ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.004701507Z level=info msg="Executing migration" id="Add OAuth token type to user_auth" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.010162129Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=5.460122ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.015693401Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.020799918Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=5.106518ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.025146808Z level=info msg="Executing migration" id="Add index to user_id column in user_auth" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.026047846Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=900.608µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.029407418Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.039556072Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=10.145954ms 11:53:07 grafana | logger=migrator 
t=2025-06-17T11:47:00.117930822Z level=info msg="Executing migration" id="Add user_unique_id to user_auth" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.122361174Z level=info msg="Migration successfully executed" id="Add user_unique_id to user_auth" duration=4.435082ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.1262982Z level=info msg="Executing migration" id="create server_lock table" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.127488582Z level=info msg="Migration successfully executed" id="create server_lock table" duration=1.192152ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.130827003Z level=info msg="Executing migration" id="add index server_lock.operation_uid" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.131913162Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=1.083329ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.13492239Z level=info msg="Executing migration" id="create user auth token table" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.13590804Z level=info msg="Migration successfully executed" id="create user auth token table" duration=985.26µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.141965497Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.143517011Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=1.551154ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.147405237Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.148907581Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=1.501804ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.152313724Z level=info msg="Executing migration" id="add index user_auth_token.user_id" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.153327973Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=1.01931ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.159577041Z level=info msg="Executing migration" id="Add revoked_at to the user auth token" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.168371803Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=8.793792ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.171628873Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.17235355Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=723.037µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.184715315Z level=info msg="Executing migration" id="add external_session_id to user_auth_token" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.192441418Z level=info msg="Migration successfully executed" id="add external_session_id to user_auth_token" duration=7.726663ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.197858468Z level=info msg="Executing migration" id="create cache_data table" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.198809677Z level=info msg="Migration successfully executed" id="create cache_data table" duration=950.839µs 11:53:07 grafana | logger=migrator 
t=2025-06-17T11:47:00.201981946Z level=info msg="Executing migration" id="add unique index cache_data.cache_key" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.203005146Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=1.02298ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.206167436Z level=info msg="Executing migration" id="create short_url table v1" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.207125125Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=957.209µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.239407966Z level=info msg="Executing migration" id="add index short_url.org_id-uid" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.241068231Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=1.664615ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.245271581Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.245300132Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=29.901µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.250370539Z level=info msg="Executing migration" id="delete alert_definition table" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.25045754Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=85.221µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.253258956Z level=info msg="Executing migration" id="recreate alert_definition table" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.254239405Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=979.899µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.259404563Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.260970377Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=1.565204ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.264272609Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.266065355Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=1.793116ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.269270055Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.269288325Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=19µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.273947459Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.275439293Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=1.490704ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.278818984Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns" 
11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.280256807Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=1.437683ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.283181715Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.284207565Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=1.02537ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.289328382Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.290675615Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=1.346923ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.293473361Z level=info msg="Executing migration" id="Add column paused in alert_definition" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.29987108Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=6.396869ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.307204219Z level=info msg="Executing migration" id="drop alert_definition table" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.308159738Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=957.179µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.313327346Z level=info msg="Executing migration" id="delete alert_definition_version table" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.313511878Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=187.232µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.316653787Z level=info msg="Executing migration" id="recreate alert_definition_version table" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.317397524Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=743.477µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.322879195Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.323662163Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=782.878µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.326774512Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.327779001Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=1.004259ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.330628868Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.330648018Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=20.24µs 11:53:07 grafana | logger=migrator 
t=2025-06-17T11:47:00.358435707Z level=info msg="Executing migration" id="drop alert_definition_version table" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.360174504Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=1.739957ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.36511537Z level=info msg="Executing migration" id="create alert_instance table" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.366898336Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=1.782446ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.370234097Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.371361338Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=1.127041ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.37588368Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.37696348Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=1.10715ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.380258751Z level=info msg="Executing migration" id="add column current_state_end to alert_instance" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.38656244Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=6.295359ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.389959831Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.391581707Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=1.621366ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.395733025Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.396718235Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=984.98µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.400749692Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.427771815Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=27.016923ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.430929984Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.453304293Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=22.374139ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.47657744Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.478143325Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on 
alert_instance" duration=1.569935ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.481596007Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.483161992Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=1.565225ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.487142769Z level=info msg="Executing migration" id="add current_reason column related to current_state" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.494442858Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=7.294218ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.497719528Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.504497871Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=6.779633ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.507996224Z level=info msg="Executing migration" id="create alert_rule table" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.508893952Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=897.828µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.512527346Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.513407144Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=879.478µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.518945136Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.520924064Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=1.986669ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.525202595Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.526203673Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=1.000658ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.531917037Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.531949207Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=33.17µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.537783381Z level=info msg="Executing migration" id="add column for to alert_rule" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.545518584Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=7.715393ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.548870475Z level=info msg="Executing migration" id="add column annotations to alert_rule" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.555640659Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=6.769764ms 11:53:07 grafana | logger=migrator 
t=2025-06-17T11:47:00.56011733Z level=info msg="Executing migration" id="add column labels to alert_rule" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.566906814Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=6.788474ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.570115914Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.57083646Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=720.086µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.583314946Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.585421976Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=2.10821ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.589380584Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.599290106Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=9.910832ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.603058701Z level=info msg="Executing migration" id="add panel_id column to alert_rule" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.609944736Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=6.885594ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.614518769Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.616297425Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=1.778406ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.620658006Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.627649851Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=6.990785ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.631947711Z level=info msg="Executing migration" id="add is_paused column to alert_rule table" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.636343301Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=4.39454ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.641854193Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.641875153Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=20.53µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.647062512Z level=info msg="Executing migration" id="create alert_rule_version table" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.648076802Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=1.01419ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.6533477Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, 
rule_uid and version columns" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.654961736Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.613376ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.657835423Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.658819861Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=984.078µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.663959169Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.66398859Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=35.071µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.667367131Z level=info msg="Executing migration" id="add column for to alert_rule_version" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.6821476Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=14.775849ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.696684815Z level=info msg="Executing migration" id="add column annotations to alert_rule_version" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.705583708Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=8.895833ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.709774448Z level=info msg="Executing migration" id="add column labels to alert_rule_version" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.716711262Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=6.906954ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.721459097Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.726308972Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=4.850305ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.735008713Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.739558976Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=4.549773ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.742422062Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.742452822Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=31.32µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.747372769Z level=info msg="Executing migration" id=create_alert_configuration_table 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.748226666Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=853.268µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.752729229Z level=info msg="Executing migration" id="Add 
column default in alert_configuration" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.759075158Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=6.345249ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.762017235Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.762033765Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=17.43µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.767087893Z level=info msg="Executing migration" id="add column org_id in alert_configuration" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.771626865Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=4.538552ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.774496032Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.775536541Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=1.040039ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.778386528Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.784780257Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=6.392809ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.78930936Z level=info msg="Executing migration" id=create_ngalert_configuration_table 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.790307719Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=997.399µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.794755721Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.795833721Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=1.07773ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.814950239Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.824221626Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=9.272707ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.829786188Z level=info msg="Executing migration" id="create provenance_type table" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.830757817Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=971.009µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.835020466Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.836084247Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=1.063511ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.840298466Z level=info msg="Executing migration" id="create 
alert_image table" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.841165564Z level=info msg="Migration successfully executed" id="create alert_image table" duration=863.568µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.844600247Z level=info msg="Executing migration" id="add unique index on token to alert_image table" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.845641506Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=1.041049ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.849640883Z level=info msg="Executing migration" id="support longer URLs in alert_image table" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.849660833Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=20.74µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.852709063Z level=info msg="Executing migration" id=create_alert_configuration_history_table 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.853775502Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=1.0662ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.857085223Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.858111703Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=1.02627ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.862323392Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.862856207Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.865914275Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.8664316Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=516.635µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.86955134Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.870648759Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=1.097199ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.875617376Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.886858141Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=11.242035ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.8899865Z level=info msg="Executing migration" id="create library_element table v1" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.890752947Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=765.997µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.896519221Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind" 11:53:07 grafana | logger=migrator 
t=2025-06-17T11:47:00.897626232Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=1.106501ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.901415767Z level=info msg="Executing migration" id="create library_element_connection table v1" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.902324355Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=908.058µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.919400144Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.921173661Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=1.772847ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.925054757Z level=info msg="Executing migration" id="add unique index library_element org_id_uid" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.926822974Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=1.764127ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.931441867Z level=info msg="Executing migration" id="increase max description length to 2048" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.931468208Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=29.011µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.936671556Z level=info msg="Executing migration" id="alter library_element model to mediumtext" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.936689306Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=18.57µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.940044348Z level=info msg="Executing migration" id="add library_element folder uid" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.949519986Z level=info msg="Migration successfully executed" id="add library_element folder uid" duration=9.476138ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.954046888Z level=info msg="Executing migration" id="populate library_element folder_uid" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.954510332Z level=info msg="Migration successfully executed" id="populate library_element folder_uid" duration=462.874µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.957683182Z level=info msg="Executing migration" id="add index library_element org_id-folder_uid-name-kind" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.958807262Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_uid-name-kind" duration=1.1236ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.962283575Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.962627689Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=343.513µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.965870229Z level=info msg="Executing migration" id="create data_keys table" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.966959828Z level=info msg="Migration successfully executed" id="create data_keys table" duration=1.088619ms 11:53:07 grafana | 
logger=migrator t=2025-06-17T11:47:00.971141858Z level=info msg="Executing migration" id="create secrets table" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.972642631Z level=info msg="Migration successfully executed" id="create secrets table" duration=1.499673ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:00.976071424Z level=info msg="Executing migration" id="rename data_keys name column to id" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.010079163Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=34.004768ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.013345783Z level=info msg="Executing migration" id="add name column into data_keys" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.020528791Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=7.181858ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.030112602Z level=info msg="Executing migration" id="copy data_keys id column values into name" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.030220403Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=108.151µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.042441249Z level=info msg="Executing migration" id="rename data_keys name column to label" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.079298179Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=36.85454ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.084137005Z level=info msg="Executing migration" id="rename data_keys id column back to name" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.115896627Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=31.755322ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.120043267Z level=info msg="Executing migration" id="create kv_store table v1" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.120859664Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=816.387µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.125275696Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.126097074Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=821.048µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.141777673Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.142289538Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=510.725µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.145898302Z level=info msg="Executing migration" id="create permission table" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.147372376Z level=info msg="Migration successfully executed" id="create permission table" duration=1.470294ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.151879249Z level=info msg="Executing migration" id="add unique index permission.role_id" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.152933769Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=1.05524ms 11:53:07 grafana | 
logger=migrator t=2025-06-17T11:47:01.155927527Z level=info msg="Executing migration" id="add unique index role_id_action_scope" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.156899606Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=971.809µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.160011656Z level=info msg="Executing migration" id="create role table" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.160880365Z level=info msg="Migration successfully executed" id="create role table" duration=868.279µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.165187155Z level=info msg="Executing migration" id="add column display_name" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.172492285Z level=info msg="Migration successfully executed" id="add column display_name" duration=7.30547ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.175604345Z level=info msg="Executing migration" id="add column group_name" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.180851665Z level=info msg="Migration successfully executed" id="add column group_name" duration=5.24665ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.183793683Z level=info msg="Executing migration" id="add index role.org_id" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.184517749Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=726.156µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.188610468Z level=info msg="Executing migration" id="add unique index role_org_id_name" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.189354625Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=743.647µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.193621036Z level=info msg="Executing migration" id="add index role_org_id_uid" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.194610896Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=989.77µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.198145119Z level=info msg="Executing migration" id="create team role table" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.198967376Z level=info msg="Migration successfully executed" id="create team role table" duration=821.947µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.204839582Z level=info msg="Executing migration" id="add index team_role.org_id" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.205859892Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=1.01988ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.20879516Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.2098479Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=1.05234ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.212872979Z level=info msg="Executing migration" id="add index team_role.team_id" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.213890909Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=1.01744ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.218139279Z level=info msg="Executing migration" id="create user role table" 11:53:07 grafana | logger=migrator 
t=2025-06-17T11:47:01.218945896Z level=info msg="Migration successfully executed" id="create user role table" duration=806.867µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.222207407Z level=info msg="Executing migration" id="add index user_role.org_id" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.223278308Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=1.070401ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.22665622Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.22773584Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=1.07898ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.231751028Z level=info msg="Executing migration" id="add index user_role.user_id" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.232795808Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=1.04445ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.2362129Z level=info msg="Executing migration" id="create builtin role table" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.237054249Z level=info msg="Migration successfully executed" id="create builtin role table" duration=841.129µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.264162906Z level=info msg="Executing migration" id="add index builtin_role.role_id" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.265048355Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=887.009µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.269743689Z level=info msg="Executing migration" id="add index builtin_role.name" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.270930521Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=1.186882ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.2751545Z level=info msg="Executing migration" id="Add column org_id to builtin_role table" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.28451212Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=9.35654ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.289026552Z level=info msg="Executing migration" id="add index builtin_role.org_id" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.289995592Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=968.49µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.293333403Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.294360273Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=1.02382ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.299574163Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.300864675Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=1.286752ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.305114875Z level=info msg="Executing migration" id="add unique index role.uid" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.305971613Z level=info 
msg="Migration successfully executed" id="add unique index role.uid" duration=854.968µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.311052042Z level=info msg="Executing migration" id="create seed assignment table" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.311849309Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=794.857µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.316637675Z level=info msg="Executing migration" id="add unique index builtin_role_role_name" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.317740436Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=1.102481ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.322788433Z level=info msg="Executing migration" id="add column hidden to role table" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.333971999Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=11.182696ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.337538084Z level=info msg="Executing migration" id="permission kind migration" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.344716111Z level=info msg="Migration successfully executed" id="permission kind migration" duration=7.177897ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.350044322Z level=info msg="Executing migration" id="permission attribute migration" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.360650603Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=10.606311ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.365990474Z level=info msg="Executing migration" id="permission identifier migration" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.378744645Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=12.752871ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.382009967Z level=info msg="Executing migration" id="add permission identifier index" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.383192617Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=1.18182ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.388546898Z level=info msg="Executing migration" id="add permission action scope role_id index" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.389644428Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=1.09427ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.396320082Z level=info msg="Executing migration" id="remove permission role_id action scope index" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.39719381Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=873.498µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.400621373Z level=info msg="Executing migration" id="add group mapping UID column to user_role table" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.408406327Z level=info msg="Migration successfully executed" id="add group mapping UID column to user_role table" duration=7.784084ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.413142322Z level=info msg="Executing migration" id="add user_role org ID, user ID, role ID, group mapping UID index" 11:53:07 grafana | logger=migrator 
t=2025-06-17T11:47:01.414417364Z level=info msg="Migration successfully executed" id="add user_role org ID, user ID, role ID, group mapping UID index" duration=1.274792ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.418985158Z level=info msg="Executing migration" id="remove user_role org ID, user ID, role ID index" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.420443941Z level=info msg="Migration successfully executed" id="remove user_role org ID, user ID, role ID index" duration=1.458553ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.424176827Z level=info msg="Executing migration" id="create query_history table v1" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.425326808Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=1.149701ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.430366846Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.431232214Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=865.188µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.435379104Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.435428894Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=50.36µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.438761906Z level=info msg="Executing migration" id="create query_history_details table v1" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.439707414Z level=info msg="Migration successfully executed" id="create query_history_details table v1" duration=944.408µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.444319099Z level=info msg="Executing migration" id="rbac disabled migrator" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.44450653Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=192.791µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.448379637Z level=info msg="Executing migration" id="teams permissions migration" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.449200705Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=821.398µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.452677768Z level=info msg="Executing migration" id="dashboard permissions" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.453301904Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=631.816µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.456650705Z level=info msg="Executing migration" id="dashboard permissions uid scopes" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.457346592Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=695.657µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.461632653Z level=info msg="Executing migration" id="drop managed folder create actions" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.462108117Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=475.294µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.467339807Z level=info msg="Executing migration" 
id="alerting notification permissions" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.468159974Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=819.967µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.471640118Z level=info msg="Executing migration" id="create query_history_star table v1" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.472475486Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=834.878µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.476618665Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.477827327Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=1.207962ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.482617742Z level=info msg="Executing migration" id="add column org_id in query_history_star" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.494358804Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=11.740732ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.498062419Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.49819852Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=111.621µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.501796684Z level=info msg="Executing migration" id="create correlation table v1" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.502881755Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=1.087241ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.508257786Z level=info msg="Executing migration" id="add index correlations.uid" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.509399877Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=1.142011ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.51386156Z level=info msg="Executing migration" id="add index correlations.source_uid" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.515452314Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=1.586264ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.520590163Z level=info msg="Executing migration" id="add correlation config column" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.529087064Z level=info msg="Migration successfully executed" id="add correlation config column" duration=8.492961ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.533416076Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.534423365Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=1.007109ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.53913758Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.5402554Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=1.1176ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.545243668Z 
level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.574654807Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=29.406409ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.581171249Z level=info msg="Executing migration" id="create correlation v2" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.582109748Z level=info msg="Migration successfully executed" id="create correlation v2" duration=935.699µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.588550549Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.589339767Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=789.508µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.592297275Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.593330165Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=1.03286ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.598703625Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.599794485Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=1.09082ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.604365099Z level=info msg="Executing migration" id="copy correlation v1 to v2" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.604540101Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=175.082µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.608425178Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.608987884Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=562.466µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.614365814Z level=info msg="Executing migration" id="add provisioning column" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.620418142Z level=info msg="Migration successfully executed" id="add provisioning column" duration=6.051758ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.62329428Z level=info msg="Executing migration" id="add type column" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.631624429Z level=info msg="Migration successfully executed" id="add type column" duration=8.329049ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.637439794Z level=info msg="Executing migration" id="create entity_events table" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.638337472Z level=info msg="Migration successfully executed" id="create entity_events table" duration=900.198µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.6454946Z level=info msg="Executing migration" id="create dashboard public config v1" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.64651206Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=1.01722ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.650611198Z level=info msg="Executing migration" id="drop index 
UQE_dashboard_public_config_uid - v1" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.651089874Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.655034071Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.655483505Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.65921913Z level=info msg="Executing migration" id="Drop old dashboard public config table" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.659797816Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=581.266µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.664597931Z level=info msg="Executing migration" id="recreate dashboard public config v1" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.665344689Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=746.588µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.668972824Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.669785751Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=812.797µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.676888039Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.678040209Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=1.15462ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.682156449Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.683196559Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=1.03993ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.688811742Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.689844071Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.032249ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.694150453Z level=info msg="Executing migration" id="Drop public config table" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.69490807Z level=info msg="Migration successfully executed" id="Drop public config table" duration=757.007µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.698013679Z level=info msg="Executing migration" id="Recreate dashboard public config v2" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.699227991Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=1.211532ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.705025366Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2" 
11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.706200357Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=1.174631ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.709449348Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.711341466Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.891758ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.715692408Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.717573265Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=1.880707ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.722608793Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.746014605Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=23.404002ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.750421807Z level=info msg="Executing migration" id="add annotations_enabled column" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.756853288Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=6.431131ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.760237941Z level=info msg="Executing migration" id="add time_selection_enabled column" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.772056623Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=11.819332ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.776559385Z level=info msg="Executing migration" id="delete orphaned public dashboards" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.776769947Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=208.872µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.781285221Z level=info msg="Executing migration" id="add share column" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.789569009Z level=info msg="Migration successfully executed" id="add share column" duration=8.282838ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.807150206Z level=info msg="Executing migration" id="backfill empty share column fields with default of public" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.807575Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=424.794µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.813922331Z level=info msg="Executing migration" id="create file table" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.815528556Z level=info msg="Migration successfully executed" id="create file table" duration=1.605535ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.820427633Z level=info msg="Executing migration" id="file table idx: path natural pk" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.821501233Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=1.073129ms 11:53:07 grafana | 
logger=migrator t=2025-06-17T11:47:01.8244437Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.825570412Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=1.126202ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.831809501Z level=info msg="Executing migration" id="create file_meta table" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.833242345Z level=info msg="Migration successfully executed" id="create file_meta table" duration=1.432504ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.836812388Z level=info msg="Executing migration" id="file table idx: path key" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.838170791Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=1.358043ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.841095989Z level=info msg="Executing migration" id="set path collation in file table" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.841109319Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=13.8µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.84329436Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.84331036Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=16.72µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.848298528Z level=info msg="Executing migration" id="managed permissions migration" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.848801212Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=500.614µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.852053073Z level=info msg="Executing migration" id="managed folder permissions alert actions migration" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.852253825Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=200.692µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.855212963Z level=info msg="Executing migration" id="RBAC action name migrator" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.857239962Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=2.026259ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.86112814Z level=info msg="Executing migration" id="Add UID column to playlist" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.870266817Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=9.139677ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.875156063Z level=info msg="Executing migration" id="Update uid column values in playlist" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.875303175Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=146.872µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.878652466Z level=info msg="Executing migration" id="Add index for uid in playlist" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.88119415Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=2.541334ms 11:53:07 grafana | 
logger=migrator t=2025-06-17T11:47:01.885695013Z level=info msg="Executing migration" id="update group index for alert rules" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.886163707Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=468.794µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.890746171Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.890950903Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=204.212µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.896473345Z level=info msg="Executing migration" id="admin only folder/dashboard permission" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.897409345Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=937.79µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.901495343Z level=info msg="Executing migration" id="add action column to seed_assignment" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.912380577Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=10.885874ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.915750119Z level=info msg="Executing migration" id="add scope column to seed_assignment" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.924640893Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=8.892234ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.927745202Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.928635171Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=889.019µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:01.932862661Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.007143848Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=74.281097ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.013092564Z level=info msg="Executing migration" id="add unique index builtin_role_name back" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.013945152Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=851.878µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.01896015Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.02004824Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=1.08747ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.023356461Z level=info msg="Executing migration" id="add primary key to seed_assigment" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.05165852Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=28.301109ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.058743287Z level=info msg="Executing migration" id="add origin column to seed_assignment" 11:53:07 grafana | 
logger=migrator t=2025-06-17T11:47:02.068226487Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=9.48272ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.071712121Z level=info msg="Executing migration" id="add origin to plugin seed_assignment" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.072017264Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=304.963µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.075164974Z level=info msg="Executing migration" id="prevent seeding OnCall access" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.075328465Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=159.731µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.078310003Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.078553685Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=241.492µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.084416631Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.084706234Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=291.903µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.089452789Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.0895999Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=147.091µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.095522107Z level=info msg="Executing migration" id="create folder table" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.096652718Z level=info msg="Migration successfully executed" id="create folder table" duration=1.130091ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.100425453Z level=info msg="Executing migration" id="Add index for parent_uid" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.102355862Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=1.925929ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.106656562Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.10850914Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=1.852708ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.114170934Z level=info msg="Executing migration" id="Update folder title length" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.114199724Z level=info msg="Migration successfully executed" id="Update folder title length" duration=29.72µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.121025009Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.122931937Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=1.906478ms 11:53:07 grafana | logger=migrator 
t=2025-06-17T11:47:02.128807653Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.130003225Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=1.195362ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.134495757Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.136093573Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=1.596295ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.140328032Z level=info msg="Executing migration" id="Sync dashboard and folder table" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.141138911Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=809.779µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.144648834Z level=info msg="Executing migration" id="Remove ghost folders from the folder table" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.145009257Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=359.593µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.148450819Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.149633801Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=1.182482ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.154276315Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.156346754Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=2.070129ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.160397283Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.16220694Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=1.810257ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.167025636Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.168307778Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=1.281992ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.172802461Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.174204154Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=1.401013ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.177638717Z level=info msg="Executing migration" id="Remove unique index UQE_folder_org_id_parent_uid_title" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.178735597Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_org_id_parent_uid_title" duration=1.09661ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.18317444Z level=info msg="Executing migration" id="create anon_device table" 
11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.184043668Z level=info msg="Migration successfully executed" id="create anon_device table" duration=869.068µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.187884434Z level=info msg="Executing migration" id="add unique index anon_device.device_id" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.18954631Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=1.659156ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.193142594Z level=info msg="Executing migration" id="add index anon_device.updated_at" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.195032212Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=1.892158ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.199380163Z level=info msg="Executing migration" id="create signing_key table" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.200217522Z level=info msg="Migration successfully executed" id="create signing_key table" duration=836.989µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.203460652Z level=info msg="Executing migration" id="add unique index signing_key.key_id" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.204537172Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=1.11691ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.210991933Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.212119914Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=1.127771ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.217297954Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.21793629Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=635.525µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.229954874Z level=info msg="Executing migration" id="Add folder_uid for dashboard" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.237518696Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=7.566142ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.246000496Z level=info msg="Executing migration" id="Populate dashboard folder_uid column" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.246613622Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=613.576µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.250820402Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.250933783Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=111.621µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.254249745Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.255148613Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=898.528µs 11:53:07 grafana | 
logger=migrator t=2025-06-17T11:47:02.260609855Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.260648905Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=39.46µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.265874705Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.266858244Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=967.239µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.270185636Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.271125585Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=939.579µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.274363595Z level=info msg="Executing migration" id="Remove unique index for dashboard_org_id_folder_uid_title_is_folder" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.275261794Z level=info msg="Migration successfully executed" id="Remove unique index for dashboard_org_id_folder_uid_title_is_folder" duration=897.999µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.280235781Z level=info msg="Executing migration" id="create sso_setting table" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.281086999Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=850.848µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.284546453Z level=info msg="Executing migration" id="copy kvstore migration status to each org" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.285183178Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=636.725µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.288338018Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.288628651Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=291.073µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.292839872Z level=info msg="Executing migration" id="managed dashboard permissions annotation actions migration" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.293405537Z level=info msg="Migration successfully executed" id="managed dashboard permissions annotation actions migration" duration=565.145µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.296690268Z level=info msg="Executing migration" id="create cloud_migration table v1" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.297898339Z level=info msg="Migration successfully executed" id="create cloud_migration table v1" duration=1.205831ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.304547113Z level=info msg="Executing migration" id="create cloud_migration_run table v1" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.306235978Z level=info msg="Migration successfully executed" id="create cloud_migration_run table v1" duration=1.689825ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.311398767Z level=info msg="Executing migration" 
id="add stack_id column" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.321460153Z level=info msg="Migration successfully executed" id="add stack_id column" duration=10.061026ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.324913486Z level=info msg="Executing migration" id="add region_slug column" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.332150295Z level=info msg="Migration successfully executed" id="add region_slug column" duration=7.236259ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.335599388Z level=info msg="Executing migration" id="add cluster_slug column" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.345200888Z level=info msg="Migration successfully executed" id="add cluster_slug column" duration=9.60078ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.358738217Z level=info msg="Executing migration" id="add migration uid column" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.371206126Z level=info msg="Migration successfully executed" id="add migration uid column" duration=12.468479ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.377933869Z level=info msg="Executing migration" id="Update uid column values for migration" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.378267752Z level=info msg="Migration successfully executed" id="Update uid column values for migration" duration=333.603µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.386036287Z level=info msg="Executing migration" id="Add unique index migration_uid" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.388309498Z level=info msg="Migration successfully executed" id="Add unique index migration_uid" duration=2.272831ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.396029241Z level=info msg="Executing migration" id="add migration run uid column" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.405904745Z level=info msg="Migration successfully executed" id="add migration run uid column" duration=9.874794ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.416590647Z level=info msg="Executing migration" id="Update uid column values for migration run" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.416890889Z level=info msg="Migration successfully executed" id="Update uid column values for migration run" duration=300.152µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.428583021Z level=info msg="Executing migration" id="Add unique index migration_run_uid" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.43058983Z level=info msg="Migration successfully executed" id="Add unique index migration_run_uid" duration=2.006329ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.448815993Z level=info msg="Executing migration" id="Rename table cloud_migration to cloud_migration_session_tmp_qwerty - v1" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.475673098Z level=info msg="Migration successfully executed" id="Rename table cloud_migration to cloud_migration_session_tmp_qwerty - v1" duration=26.859705ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.487133607Z level=info msg="Executing migration" id="create cloud_migration_session v2" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.488375928Z level=info msg="Migration successfully executed" id="create cloud_migration_session v2" duration=1.241741ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.507226537Z level=info msg="Executing migration" id="create index 
UQE_cloud_migration_session_uid - v2" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.509280017Z level=info msg="Migration successfully executed" id="create index UQE_cloud_migration_session_uid - v2" duration=2.05373ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.517589516Z level=info msg="Executing migration" id="copy cloud_migration_session v1 to v2" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.51803186Z level=info msg="Migration successfully executed" id="copy cloud_migration_session v1 to v2" duration=442.144µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.524084798Z level=info msg="Executing migration" id="drop cloud_migration_session_tmp_qwerty" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.525060108Z level=info msg="Migration successfully executed" id="drop cloud_migration_session_tmp_qwerty" duration=975.2µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.54433428Z level=info msg="Executing migration" id="Rename table cloud_migration_run to cloud_migration_snapshot_tmp_qwerty - v1" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.573272805Z level=info msg="Migration successfully executed" id="Rename table cloud_migration_run to cloud_migration_snapshot_tmp_qwerty - v1" duration=28.939575ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.580274211Z level=info msg="Executing migration" id="create cloud_migration_snapshot v2" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.581877227Z level=info msg="Migration successfully executed" id="create cloud_migration_snapshot v2" duration=1.610616ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.599668776Z level=info msg="Executing migration" id="create index UQE_cloud_migration_snapshot_uid - v2" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.601955618Z level=info msg="Migration successfully executed" id="create index UQE_cloud_migration_snapshot_uid - v2" duration=2.286822ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.608191587Z level=info msg="Executing migration" id="copy cloud_migration_snapshot v1 to v2" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.608647691Z level=info msg="Migration successfully executed" id="copy cloud_migration_snapshot v1 to v2" duration=456.464µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.636504205Z level=info msg="Executing migration" id="drop cloud_migration_snapshot_tmp_qwerty" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.637712397Z level=info msg="Migration successfully executed" id="drop cloud_migration_snapshot_tmp_qwerty" duration=1.209632ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.657425085Z level=info msg="Executing migration" id="add snapshot upload_url column" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.668351699Z level=info msg="Migration successfully executed" id="add snapshot upload_url column" duration=10.925554ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.672985342Z level=info msg="Executing migration" id="add snapshot status column" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.6801496Z level=info msg="Migration successfully executed" id="add snapshot status column" duration=7.164238ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.689136256Z level=info msg="Executing migration" id="add snapshot local_directory column" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.695938991Z level=info msg="Migration successfully executed" id="add snapshot local_directory column" 
duration=6.802215ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.7105575Z level=info msg="Executing migration" id="add snapshot gms_snapshot_uid column" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.717227952Z level=info msg="Migration successfully executed" id="add snapshot gms_snapshot_uid column" duration=6.670022ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.728914953Z level=info msg="Executing migration" id="add snapshot encryption_key column" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.741030069Z level=info msg="Migration successfully executed" id="add snapshot encryption_key column" duration=12.116786ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.783494922Z level=info msg="Executing migration" id="add snapshot error_string column" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.796418584Z level=info msg="Migration successfully executed" id="add snapshot error_string column" duration=12.925062ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.829618Z level=info msg="Executing migration" id="create cloud_migration_resource table v1" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.831080785Z level=info msg="Migration successfully executed" id="create cloud_migration_resource table v1" duration=1.462694ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.843696224Z level=info msg="Executing migration" id="delete cloud_migration_snapshot.result column" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.880752987Z level=info msg="Migration successfully executed" id="delete cloud_migration_snapshot.result column" duration=37.055893ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.889824662Z level=info msg="Executing migration" id="add cloud_migration_resource.name column" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.900676906Z level=info msg="Migration successfully executed" id="add cloud_migration_resource.name column" duration=10.850904ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.910349667Z level=info msg="Executing migration" id="add cloud_migration_resource.parent_name column" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.92438685Z level=info msg="Migration successfully executed" id="add cloud_migration_resource.parent_name column" duration=14.039393ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.949370868Z level=info msg="Executing migration" id="add cloud_migration_session.org_id column" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.962600974Z level=info msg="Migration successfully executed" id="add cloud_migration_session.org_id column" duration=13.225756ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.980561844Z level=info msg="Executing migration" id="add cloud_migration_resource.error_code column" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:02.996039302Z level=info msg="Migration successfully executed" id="add cloud_migration_resource.error_code column" duration=15.478238ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:03.006800773Z level=info msg="Executing migration" id="increase resource_uid column length" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:03.006823343Z level=info msg="Migration successfully executed" id="increase resource_uid column length" duration=23.64µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:03.011821879Z level=info msg="Executing migration" id="alter kv_store.value to longtext" 11:53:07 grafana | logger=migrator 
t=2025-06-17T11:47:03.011878519Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=60.03µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:03.02480355Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:03.037050462Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=12.247882ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:03.042282961Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:03.052662627Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=10.379116ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:03.07459266Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:03.075201205Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=608.225µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:03.123959315Z level=info msg="Executing migration" id="managed folder permissions alerting silences actions migration" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:03.124331728Z level=info msg="Migration successfully executed" id="managed folder permissions alerting silences actions migration" duration=367.113µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:03.143174892Z level=info msg="Executing migration" id="add record column to alert_rule table" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:03.156805118Z level=info msg="Migration successfully executed" id="add record column to alert_rule table" duration=13.630716ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:03.162366599Z level=info msg="Executing migration" id="add record column to alert_rule_version table" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:03.17210486Z level=info msg="Migration successfully executed" id="add record column to alert_rule_version table" duration=9.736861ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:03.192973962Z level=info msg="Executing migration" id="add resolved_at column to alert_instance table" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:03.20571534Z level=info msg="Migration successfully executed" id="add resolved_at column to alert_instance table" duration=12.733408ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:03.211597934Z level=info msg="Executing migration" id="add last_sent_at column to alert_instance table" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:03.224640494Z level=info msg="Migration successfully executed" id="add last_sent_at column to alert_instance table" duration=13.04287ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:03.235757386Z level=info msg="Executing migration" id="Add scope to alert.notifications.receivers:read and alert.notifications.receivers.secrets:read" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:03.236399232Z level=info msg="Migration successfully executed" id="Add scope to alert.notifications.receivers:read and alert.notifications.receivers.secrets:read" duration=641.206µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:03.241133576Z level=info msg="Executing migration" id="add metadata column to 
alert_rule table" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:03.25452732Z level=info msg="Migration successfully executed" id="add metadata column to alert_rule table" duration=13.393244ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:03.261715756Z level=info msg="Executing migration" id="add metadata column to alert_rule_version table" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:03.271439236Z level=info msg="Migration successfully executed" id="add metadata column to alert_rule_version table" duration=9.7235ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:03.280520119Z level=info msg="Executing migration" id="delete orphaned service account permissions" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:03.280867312Z level=info msg="Migration successfully executed" id="delete orphaned service account permissions" duration=349.513µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:03.288661495Z level=info msg="Executing migration" id="adding action set permissions" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:03.289390151Z level=info msg="Migration successfully executed" id="adding action set permissions" duration=729.706µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:03.296065413Z level=info msg="Executing migration" id="create user_external_session table" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:03.297297884Z level=info msg="Migration successfully executed" id="create user_external_session table" duration=1.232301ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:03.303175318Z level=info msg="Executing migration" id="increase name_id column length to 1024" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:03.303214728Z level=info msg="Migration successfully executed" id="increase name_id column length to 1024" duration=40.99µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:03.309488397Z level=info msg="Executing migration" id="increase session_id column length to 1024" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:03.309535978Z level=info msg="Migration successfully executed" id="increase session_id column length to 1024" duration=37.711µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:03.319094225Z level=info msg="Executing migration" id="remove scope from alert.notifications.receivers:create" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:03.319681321Z level=info msg="Migration successfully executed" id="remove scope from alert.notifications.receivers:create" duration=587.196µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:03.328786384Z level=info msg="Executing migration" id="add created_by column to alert_rule_version table" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:03.339993018Z level=info msg="Migration successfully executed" id="add created_by column to alert_rule_version table" duration=11.206674ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:03.347498787Z level=info msg="Executing migration" id="add updated_by column to alert_rule table" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:03.363187513Z level=info msg="Migration successfully executed" id="add updated_by column to alert_rule table" duration=15.690356ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:03.369553221Z level=info msg="Executing migration" id="add alert_rule_state table" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:03.37054729Z level=info msg="Migration successfully executed" id="add alert_rule_state table" duration=976.439µs 11:53:07 grafana | 
logger=migrator t=2025-06-17T11:47:03.38468679Z level=info msg="Executing migration" id="add index to alert_rule_state on org_id and rule_uid columns" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:03.386575868Z level=info msg="Migration successfully executed" id="add index to alert_rule_state on org_id and rule_uid columns" duration=1.888538ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:03.394580702Z level=info msg="Executing migration" id="add guid column to alert_rule table" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:03.407702194Z level=info msg="Migration successfully executed" id="add guid column to alert_rule table" duration=13.121611ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:03.417625164Z level=info msg="Executing migration" id="add rule_guid column to alert_rule_version table" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:03.427601687Z level=info msg="Migration successfully executed" id="add rule_guid column to alert_rule_version table" duration=9.975913ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:03.44962158Z level=info msg="Executing migration" id="cleanup alert_rule_version table" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:03.449660541Z level=info msg="Rule version record limit is not set, fallback to 100" limit=0 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:03.449993324Z level=info msg="Cleaning up table `alert_rule_version`" batchSize=50 batches=0 keepVersions=100 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:03.450017544Z level=info msg="Migration successfully executed" id="cleanup alert_rule_version table" duration=397.164µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:03.457847557Z level=info msg="Executing migration" id="populate rule guid in alert rule table" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:03.458732194Z level=info msg="Migration successfully executed" id="populate rule guid in alert rule table" duration=884.108µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:03.470124889Z level=info msg="Executing migration" id="drop index in alert_rule_version table on rule_org_id, rule_uid and version columns" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:03.471992387Z level=info msg="Migration successfully executed" id="drop index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.867158ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:03.490481977Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid, rule_guid and version columns" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:03.492559156Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid, rule_guid and version columns" duration=2.076679ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:03.498489731Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_guid and version columns" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:03.503048223Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_guid and version columns" duration=4.557772ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:03.524568901Z level=info msg="Executing migration" id="add index in alert_rule table on guid columns" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:03.526428479Z level=info msg="Migration successfully executed" id="add index in alert_rule table on guid 
columns" duration=1.859488ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:03.53737015Z level=info msg="Executing migration" id="add keep_firing_for column to alert_rule" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:03.552003635Z level=info msg="Migration successfully executed" id="add keep_firing_for column to alert_rule" duration=14.634705ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:03.561173759Z level=info msg="Executing migration" id="add keep_firing_for column to alert_rule_version" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:03.572211241Z level=info msg="Migration successfully executed" id="add keep_firing_for column to alert_rule_version" duration=11.069512ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:03.577918324Z level=info msg="Executing migration" id="add missing_series_evals_to_resolve column to alert_rule" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:03.588448241Z level=info msg="Migration successfully executed" id="add missing_series_evals_to_resolve column to alert_rule" duration=10.524027ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:03.600427601Z level=info msg="Executing migration" id="add missing_series_evals_to_resolve column to alert_rule_version" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:03.614655263Z level=info msg="Migration successfully executed" id="add missing_series_evals_to_resolve column to alert_rule_version" duration=14.226572ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:03.62739823Z level=info msg="Executing migration" id="remove the datasources:drilldown action" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:03.627677843Z level=info msg="Removed 0 datasources:drilldown permissions" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:03.627698463Z level=info msg="Migration successfully executed" id="remove the datasources:drilldown action" duration=301.373µs 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:03.631781221Z level=info msg="Executing migration" id="remove title in folder unique index" 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:03.633037852Z level=info msg="Migration successfully executed" id="remove title in folder unique index" duration=1.257051ms 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:03.645481548Z level=info msg="migrations completed" performed=654 skipped=0 duration=6.66199057s 11:53:07 grafana | logger=migrator t=2025-06-17T11:47:03.646330155Z level=info msg="Unlocking database" 11:53:07 grafana | logger=sqlstore t=2025-06-17T11:47:03.664860276Z level=info msg="Created default admin" user=admin 11:53:07 grafana | logger=sqlstore t=2025-06-17T11:47:03.665158719Z level=info msg="Created default organization" 11:53:07 grafana | logger=secrets t=2025-06-17T11:47:03.680369239Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1 11:53:07 grafana | logger=plugin.angulardetectorsprovider.dynamic t=2025-06-17T11:47:03.770266809Z level=info msg="Restored cache from database" duration=416.854µs 11:53:07 grafana | logger=resource-migrator t=2025-06-17T11:47:03.778657476Z level=info msg="Locking database" 11:53:07 grafana | logger=resource-migrator t=2025-06-17T11:47:03.778691387Z level=info msg="Starting DB migrations" 11:53:07 grafana | logger=resource-migrator t=2025-06-17T11:47:03.786359008Z level=info msg="Executing migration" id="create resource_migration_log table" 11:53:07 grafana | logger=resource-migrator t=2025-06-17T11:47:03.787155845Z level=info msg="Migration successfully 
executed" id="create resource_migration_log table" duration=797.287µs 11:53:07 grafana | logger=resource-migrator t=2025-06-17T11:47:03.803201493Z level=info msg="Executing migration" id="Initialize resource tables" 11:53:07 grafana | logger=resource-migrator t=2025-06-17T11:47:03.803239683Z level=info msg="Migration successfully executed" id="Initialize resource tables" duration=40.04µs 11:53:07 grafana | logger=resource-migrator t=2025-06-17T11:47:03.811402269Z level=info msg="Executing migration" id="drop table resource" 11:53:07 grafana | logger=resource-migrator t=2025-06-17T11:47:03.81155187Z level=info msg="Migration successfully executed" id="drop table resource" duration=150.951µs 11:53:07 grafana | logger=resource-migrator t=2025-06-17T11:47:03.822709953Z level=info msg="Executing migration" id="create table resource" 11:53:07 grafana | logger=resource-migrator t=2025-06-17T11:47:03.824079035Z level=info msg="Migration successfully executed" id="create table resource" duration=1.370952ms 11:53:07 grafana | logger=resource-migrator t=2025-06-17T11:47:03.834251599Z level=info msg="Executing migration" id="create table resource, index: 0" 11:53:07 grafana | logger=resource-migrator t=2025-06-17T11:47:03.836132447Z level=info msg="Migration successfully executed" id="create table resource, index: 0" duration=1.882198ms 11:53:07 grafana | logger=resource-migrator t=2025-06-17T11:47:03.844618895Z level=info msg="Executing migration" id="drop table resource_history" 11:53:07 grafana | logger=resource-migrator t=2025-06-17T11:47:03.844769777Z level=info msg="Migration successfully executed" id="drop table resource_history" duration=154.342µs 11:53:07 grafana | logger=resource-migrator t=2025-06-17T11:47:03.855107532Z level=info msg="Executing migration" id="create table resource_history" 11:53:07 grafana | logger=resource-migrator t=2025-06-17T11:47:03.856266383Z level=info msg="Migration successfully executed" id="create table resource_history" duration=1.159891ms 11:53:07 grafana | logger=resource-migrator t=2025-06-17T11:47:03.860470051Z level=info msg="Executing migration" id="create table resource_history, index: 0" 11:53:07 grafana | logger=resource-migrator t=2025-06-17T11:47:03.861742093Z level=info msg="Migration successfully executed" id="create table resource_history, index: 0" duration=1.272492ms 11:53:07 grafana | logger=resource-migrator t=2025-06-17T11:47:03.872578273Z level=info msg="Executing migration" id="create table resource_history, index: 1" 11:53:07 grafana | logger=resource-migrator t=2025-06-17T11:47:03.874197078Z level=info msg="Migration successfully executed" id="create table resource_history, index: 1" duration=1.622965ms 11:53:07 grafana | logger=resource-migrator t=2025-06-17T11:47:03.879266545Z level=info msg="Executing migration" id="drop table resource_version" 11:53:07 grafana | logger=resource-migrator t=2025-06-17T11:47:03.879391666Z level=info msg="Migration successfully executed" id="drop table resource_version" duration=125.631µs 11:53:07 grafana | logger=resource-migrator t=2025-06-17T11:47:03.883683266Z level=info msg="Executing migration" id="create table resource_version" 11:53:07 grafana | logger=resource-migrator t=2025-06-17T11:47:03.884973667Z level=info msg="Migration successfully executed" id="create table resource_version" duration=1.291211ms 11:53:07 grafana | logger=resource-migrator t=2025-06-17T11:47:03.890310507Z level=info msg="Executing migration" id="create table resource_version, index: 0" 11:53:07 grafana | 
logger=resource-migrator t=2025-06-17T11:47:03.891915611Z level=info msg="Migration successfully executed" id="create table resource_version, index: 0" duration=1.604514ms 11:53:07 grafana | logger=resource-migrator t=2025-06-17T11:47:03.899456161Z level=info msg="Executing migration" id="drop table resource_blob" 11:53:07 grafana | logger=resource-migrator t=2025-06-17T11:47:03.899552122Z level=info msg="Migration successfully executed" id="drop table resource_blob" duration=96.721µs 11:53:07 grafana | logger=resource-migrator t=2025-06-17T11:47:03.904917021Z level=info msg="Executing migration" id="create table resource_blob" 11:53:07 grafana | logger=resource-migrator t=2025-06-17T11:47:03.906109953Z level=info msg="Migration successfully executed" id="create table resource_blob" duration=1.192502ms 11:53:07 grafana | logger=resource-migrator t=2025-06-17T11:47:03.912858605Z level=info msg="Executing migration" id="create table resource_blob, index: 0" 11:53:07 grafana | logger=resource-migrator t=2025-06-17T11:47:03.914199487Z level=info msg="Migration successfully executed" id="create table resource_blob, index: 0" duration=1.343152ms 11:53:07 grafana | logger=resource-migrator t=2025-06-17T11:47:03.929403227Z level=info msg="Executing migration" id="create table resource_blob, index: 1" 11:53:07 grafana | logger=resource-migrator t=2025-06-17T11:47:03.930677449Z level=info msg="Migration successfully executed" id="create table resource_blob, index: 1" duration=1.276042ms 11:53:07 grafana | logger=resource-migrator t=2025-06-17T11:47:03.937416351Z level=info msg="Executing migration" id="Add column previous_resource_version in resource_history" 11:53:07 grafana | logger=resource-migrator t=2025-06-17T11:47:03.948187751Z level=info msg="Migration successfully executed" id="Add column previous_resource_version in resource_history" duration=10.76905ms 11:53:07 grafana | logger=resource-migrator t=2025-06-17T11:47:03.957417516Z level=info msg="Executing migration" id="Add column previous_resource_version in resource" 11:53:07 grafana | logger=resource-migrator t=2025-06-17T11:47:03.967247886Z level=info msg="Migration successfully executed" id="Add column previous_resource_version in resource" duration=9.82778ms 11:53:07 grafana | logger=resource-migrator t=2025-06-17T11:47:03.972539415Z level=info msg="Executing migration" id="Add index to resource_history for polling" 11:53:07 grafana | logger=resource-migrator t=2025-06-17T11:47:03.973981709Z level=info msg="Migration successfully executed" id="Add index to resource_history for polling" duration=1.441774ms 11:53:07 grafana | logger=resource-migrator t=2025-06-17T11:47:03.985300143Z level=info msg="Executing migration" id="Add index to resource for loading" 11:53:07 grafana | logger=resource-migrator t=2025-06-17T11:47:03.987460333Z level=info msg="Migration successfully executed" id="Add index to resource for loading" duration=2.15957ms 11:53:07 grafana | logger=resource-migrator t=2025-06-17T11:47:03.99365992Z level=info msg="Executing migration" id="Add column folder in resource_history" 11:53:07 grafana | logger=resource-migrator t=2025-06-17T11:47:04.005157716Z level=info msg="Migration successfully executed" id="Add column folder in resource_history" duration=11.494046ms 11:53:07 grafana | logger=resource-migrator t=2025-06-17T11:47:04.009194493Z level=info msg="Executing migration" id="Add column folder in resource" 11:53:07 grafana | logger=resource-migrator t=2025-06-17T11:47:04.016916825Z level=info msg="Migration successfully 
executed" id="Add column folder in resource" duration=7.721522ms 11:53:07 grafana | logger=resource-migrator t=2025-06-17T11:47:04.027501663Z level=info msg="Executing migration" id="Migrate DeletionMarkers to real Resource objects" 11:53:07 grafana | logger=deletion-marker-migrator t=2025-06-17T11:47:04.027540833Z level=info msg="finding any deletion markers" 11:53:07 grafana | logger=resource-migrator t=2025-06-17T11:47:04.028269069Z level=info msg="Migration successfully executed" id="Migrate DeletionMarkers to real Resource objects" duration=768.027µs 11:53:07 grafana | logger=resource-migrator t=2025-06-17T11:47:04.032503139Z level=info msg="Executing migration" id="Add index to resource_history for get trash" 11:53:07 grafana | logger=resource-migrator t=2025-06-17T11:47:04.034673798Z level=info msg="Migration successfully executed" id="Add index to resource_history for get trash" duration=2.164709ms 11:53:07 grafana | logger=resource-migrator t=2025-06-17T11:47:04.039585964Z level=info msg="Executing migration" id="Add generation to resource history" 11:53:07 grafana | logger=resource-migrator t=2025-06-17T11:47:04.051074129Z level=info msg="Migration successfully executed" id="Add generation to resource history" duration=11.483215ms 11:53:07 grafana | logger=resource-migrator t=2025-06-17T11:47:04.057798752Z level=info msg="Executing migration" id="Add generation index to resource history" 11:53:07 grafana | logger=resource-migrator t=2025-06-17T11:47:04.060039953Z level=info msg="Migration successfully executed" id="Add generation index to resource history" duration=2.241361ms 11:53:07 grafana | logger=resource-migrator t=2025-06-17T11:47:04.065617504Z level=info msg="migrations completed" performed=26 skipped=0 duration=279.312667ms 11:53:07 grafana | logger=resource-migrator t=2025-06-17T11:47:04.066784535Z level=info msg="Unlocking database" 11:53:07 grafana | t=2025-06-17T11:47:04.067439841Z level=info caller=logger.go:214 time=2025-06-17T11:47:04.06741254Z msg="Using channel notifier" logger=sql-resource-server 11:53:07 grafana | logger=plugin.store t=2025-06-17T11:47:04.081905804Z level=info msg="Loading plugins..." 
11:53:07 grafana | logger=plugins.registration t=2025-06-17T11:47:04.131037108Z level=error msg="Could not register plugin" pluginId=table error="plugin table is already registered" 11:53:07 grafana | logger=plugins.initialization t=2025-06-17T11:47:04.131063378Z level=error msg="Could not initialize plugin" pluginId=table error="plugin table is already registered" 11:53:07 grafana | logger=plugin.store t=2025-06-17T11:47:04.131130678Z level=info msg="Plugins loaded" count=53 duration=49.225994ms 11:53:07 grafana | logger=query_data t=2025-06-17T11:47:04.135853722Z level=info msg="Query Service initialization" 11:53:07 grafana | logger=live.push_http t=2025-06-17T11:47:04.140378243Z level=info msg="Live Push Gateway initialization" 11:53:07 grafana | logger=ngalert.notifier.alertmanager org=1 t=2025-06-17T11:47:04.157582712Z level=info msg="Applying new configuration to Alertmanager" configHash=d2c56faca6af2a5772ff4253222f7386 11:53:07 grafana | logger=ngalert t=2025-06-17T11:47:04.183674473Z level=info msg="Using simple database alert instance store" 11:53:07 grafana | logger=ngalert.state.manager.persist t=2025-06-17T11:47:04.183708993Z level=info msg="Using sync state persister" 11:53:07 grafana | logger=infra.usagestats.collector t=2025-06-17T11:47:04.187028264Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2 11:53:07 grafana | logger=grafanaStorageLogger t=2025-06-17T11:47:04.187306567Z level=info msg="Storage starting" 11:53:07 grafana | logger=plugin.backgroundinstaller t=2025-06-17T11:47:04.187405347Z level=info msg="Installing plugin" pluginId=grafana-lokiexplore-app version= 11:53:07 grafana | logger=http.server t=2025-06-17T11:47:04.19093578Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=http subUrl= socket= 11:53:07 grafana | logger=ngalert.state.manager t=2025-06-17T11:47:04.19101774Z level=info msg="Warming state cache for startup" 11:53:07 grafana | logger=ngalert.multiorg.alertmanager t=2025-06-17T11:47:04.19313776Z level=info msg="Starting MultiOrg Alertmanager" 11:53:07 grafana | logger=sqlstore.transactions t=2025-06-17T11:47:04.198416549Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 11:53:07 grafana | logger=provisioning.datasources t=2025-06-17T11:47:04.25496627Z level=info msg="inserting datasource from configuration" name=PolicyPrometheus uid=dkSf71fnz 11:53:07 grafana | logger=sqlstore.transactions t=2025-06-17T11:47:04.277581899Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=1 11:53:07 grafana | logger=grafana.update.checker t=2025-06-17T11:47:04.282735747Z level=info msg="Update check succeeded" duration=89.90903ms 11:53:07 grafana | logger=plugins.update.checker t=2025-06-17T11:47:04.283734256Z level=info msg="Update check succeeded" duration=90.931618ms 11:53:07 grafana | logger=provisioning.alerting t=2025-06-17T11:47:04.40352558Z level=info msg="starting to provision alerting" 11:53:07 grafana | logger=provisioning.alerting t=2025-06-17T11:47:04.403561441Z level=info msg="finished to provision alerting" 11:53:07 grafana | logger=grafana-apiserver t=2025-06-17T11:47:04.405274027Z level=info msg="Adding GroupVersion iam.grafana.app v0alpha1 to ResourceManager" 11:53:07 grafana | logger=provisioning.dashboard t=2025-06-17T11:47:04.40680994Z level=info msg="starting to provision dashboards" 11:53:07 grafana | logger=grafana-apiserver t=2025-06-17T11:47:04.406823281Z level=info msg="Adding GroupVersion 
notifications.alerting.grafana.app v0alpha1 to ResourceManager" 11:53:07 grafana | logger=grafana-apiserver t=2025-06-17T11:47:04.407355525Z level=info msg="Adding GroupVersion userstorage.grafana.app v0alpha1 to ResourceManager" 11:53:07 grafana | logger=grafana-apiserver t=2025-06-17T11:47:04.407863321Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager" 11:53:07 grafana | logger=grafana-apiserver t=2025-06-17T11:47:04.410228433Z level=info msg="Adding GroupVersion dashboard.grafana.app v1beta1 to ResourceManager" 11:53:07 grafana | logger=grafana-apiserver t=2025-06-17T11:47:04.410738968Z level=info msg="Adding GroupVersion dashboard.grafana.app v0alpha1 to ResourceManager" 11:53:07 grafana | logger=grafana-apiserver t=2025-06-17T11:47:04.411209922Z level=info msg="Adding GroupVersion dashboard.grafana.app v2alpha1 to ResourceManager" 11:53:07 grafana | logger=grafana-apiserver t=2025-06-17T11:47:04.411674776Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager" 11:53:07 grafana | logger=grafana-apiserver t=2025-06-17T11:47:04.412562394Z level=info msg="Adding GroupVersion folder.grafana.app v1beta1 to ResourceManager" 11:53:07 grafana | logger=ngalert.state.manager t=2025-06-17T11:47:04.415573231Z level=info msg="State cache has been initialized" states=0 duration=224.554451ms 11:53:07 grafana | logger=ngalert.scheduler t=2025-06-17T11:47:04.415615352Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=3 11:53:07 grafana | logger=ticker t=2025-06-17T11:47:04.415668092Z level=info msg=starting first_tick=2025-06-17T11:47:10Z 11:53:07 grafana | logger=app-registry t=2025-06-17T11:47:04.46522148Z level=info msg="app registry initialized" 11:53:07 grafana | logger=plugin.angulardetectorsprovider.dynamic t=2025-06-17T11:47:04.480200297Z level=info msg="Patterns update finished" duration=119.208929ms 11:53:07 grafana | logger=plugin.installer t=2025-06-17T11:47:04.81645408Z level=info msg="Installing plugin" pluginId=grafana-lokiexplore-app version= 11:53:07 grafana | logger=installer.fs t=2025-06-17T11:47:04.956469451Z level=info msg="Downloaded and extracted grafana-lokiexplore-app v1.0.18 zip successfully to /var/lib/grafana/plugins/grafana-lokiexplore-app" 11:53:07 grafana | logger=plugins.registration t=2025-06-17T11:47:05.034226338Z level=info msg="Plugin registered" pluginId=grafana-lokiexplore-app 11:53:07 grafana | logger=plugin.backgroundinstaller t=2025-06-17T11:47:05.034326609Z level=info msg="Plugin successfully installed" pluginId=grafana-lokiexplore-app version= duration=846.865881ms 11:53:07 grafana | logger=plugin.backgroundinstaller t=2025-06-17T11:47:05.034410809Z level=info msg="Installing plugin" pluginId=grafana-pyroscope-app version= 11:53:07 grafana | logger=plugin.installer t=2025-06-17T11:47:05.282482827Z level=info msg="Installing plugin" pluginId=grafana-pyroscope-app version= 11:53:07 grafana | logger=installer.fs t=2025-06-17T11:47:05.338768396Z level=info msg="Downloaded and extracted grafana-pyroscope-app v1.4.1 zip successfully to /var/lib/grafana/plugins/grafana-pyroscope-app" 11:53:07 grafana | logger=plugins.registration t=2025-06-17T11:47:05.380781263Z level=info msg="Plugin registered" pluginId=grafana-pyroscope-app 11:53:07 grafana | logger=plugin.backgroundinstaller t=2025-06-17T11:47:05.380818493Z level=info msg="Plugin successfully installed" pluginId=grafana-pyroscope-app version= duration=346.306673ms 11:53:07 grafana | logger=plugin.backgroundinstaller 
t=2025-06-17T11:47:05.380844903Z level=info msg="Installing plugin" pluginId=grafana-exploretraces-app version= 11:53:07 grafana | logger=provisioning.dashboard t=2025-06-17T11:47:05.383992802Z level=info msg="finished to provision dashboards" 11:53:07 grafana | logger=plugin.installer t=2025-06-17T11:47:05.703621349Z level=info msg="Installing plugin" pluginId=grafana-exploretraces-app version= 11:53:07 grafana | logger=installer.fs t=2025-06-17T11:47:05.762919565Z level=info msg="Downloaded and extracted grafana-exploretraces-app v1.0.0 zip successfully to /var/lib/grafana/plugins/grafana-exploretraces-app" 11:53:07 grafana | logger=plugins.registration t=2025-06-17T11:47:05.778709821Z level=info msg="Plugin registered" pluginId=grafana-exploretraces-app 11:53:07 grafana | logger=plugin.backgroundinstaller t=2025-06-17T11:47:05.778729762Z level=info msg="Plugin successfully installed" pluginId=grafana-exploretraces-app version= duration=397.880609ms 11:53:07 grafana | logger=plugin.backgroundinstaller t=2025-06-17T11:47:05.778750122Z level=info msg="Installing plugin" pluginId=grafana-metricsdrilldown-app version= 11:53:07 grafana | logger=plugin.installer t=2025-06-17T11:47:06.214958522Z level=info msg="Installing plugin" pluginId=grafana-metricsdrilldown-app version= 11:53:07 grafana | logger=installer.fs t=2025-06-17T11:47:06.28315331Z level=info msg="Downloaded and extracted grafana-metricsdrilldown-app v1.0.1 zip successfully to /var/lib/grafana/plugins/grafana-metricsdrilldown-app" 11:53:07 grafana | logger=plugins.registration t=2025-06-17T11:47:06.302965073Z level=info msg="Plugin registered" pluginId=grafana-metricsdrilldown-app 11:53:07 grafana | logger=plugin.backgroundinstaller t=2025-06-17T11:47:06.302986783Z level=info msg="Plugin successfully installed" pluginId=grafana-metricsdrilldown-app version= duration=524.232361ms 11:53:07 grafana | logger=infra.usagestats t=2025-06-17T11:48:43.197585509Z level=info msg="Usage stats are ready to report" 11:53:07 kafka | ===> User 11:53:07 kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) 11:53:07 kafka | ===> Configuring ... 11:53:07 kafka | Running in Zookeeper mode... 11:53:07 kafka | ===> Running preflight checks ... 11:53:07 kafka | ===> Check if /var/lib/kafka/data is writable ... 11:53:07 kafka | ===> Check if Zookeeper is healthy ... 11:53:07 kafka | [2025-06-17 11:46:59,089] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper) 11:53:07 kafka | [2025-06-17 11:46:59,089] INFO Client environment:host.name=kafka (org.apache.zookeeper.ZooKeeper) 11:53:07 kafka | [2025-06-17 11:46:59,089] INFO Client environment:java.version=11.0.26 (org.apache.zookeeper.ZooKeeper) 11:53:07 kafka | [2025-06-17 11:46:59,089] INFO Client environment:java.vendor=Azul Systems, Inc. 
(org.apache.zookeeper.ZooKeeper) 11:53:07 kafka | [2025-06-17 11:46:59,089] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) 11:53:07 kafka | [2025-06-17 11:46:59,089] INFO Client environment:java.class.path=/usr/share/java/cp-base-new/kafka-storage-api-7.4.9-ccs.jar:/usr/share/java/cp-base-new/scala-logging_2.13-3.9.4.jar:/usr/share/java/cp-base-new/jackson-datatype-jdk8-2.14.2.jar:/usr/share/java/cp-base-new/logredactor-1.0.12.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/kafka-server-common-7.4.9-ccs.jar:/usr/share/java/cp-base-new/scala-library-2.13.10.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/jackson-annotations-2.14.2.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/kafka-clients-7.4.9-ccs.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.14.2.jar:/usr/share/java/cp-base-new/kafka-storage-7.4.9-ccs.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/kafka-raft-7.4.9-ccs.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.8.4.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.2-1.jar:/usr/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-7.4.9-ccs.jar:/usr/share/java/cp-base-new/common-utils-7.4.9.jar:/usr/share/java/cp-base-new/jackson-dataformat-yaml-2.14.2.jar:/usr/share/java/cp-base-new/kafka-metadata-7.4.9-ccs.jar:/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/share/java/cp-base-new/jmx_prometheus_javaagent-0.18.0.jar:/usr/share/java/cp-base-new/reload4j-1.2.25.jar:/usr/share/java/cp-base-new/jackson-core-2.14.2.jar:/usr/share/java/cp-base-new/commons-io-2.16.0.jar:/usr/share/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/audience-annotations-0.12.0.jar:/usr/share/java/cp-base-new/gson-2.9.0.jar:/usr/share/java/cp-base-new/snakeyaml-2.0.jar:/usr/share/java/cp-base-new/zookeeper-3.8.4.jar:/usr/share/java/cp-base-new/kafka_2.13-7.4.9-ccs.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.12.jar:/usr/share/java/cp-base-new/utility-belt-7.4.9-53.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.10.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.14.2.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/jackson-databind-2.14.2.jar:/usr/share/java/cp-base-new/snappy-java-1.1.10.5.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.4.9.jar:/usr/share/java/cp-base-new/jose4j-0.9.5.jar (org.apache.zookeeper.ZooKeeper) 11:53:07 kafka | [2025-06-17 11:46:59,089] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) 11:53:07 kafka | [2025-06-17 11:46:59,090] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) 11:53:07 kafka | [2025-06-17 11:46:59,090] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) 11:53:07 kafka | [2025-06-17 11:46:59,090] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) 11:53:07 kafka | [2025-06-17 
11:46:59,090] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) 11:53:07 kafka | [2025-06-17 11:46:59,090] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) 11:53:07 kafka | [2025-06-17 11:46:59,090] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) 11:53:07 kafka | [2025-06-17 11:46:59,090] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) 11:53:07 kafka | [2025-06-17 11:46:59,090] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) 11:53:07 kafka | [2025-06-17 11:46:59,090] INFO Client environment:os.memory.free=493MB (org.apache.zookeeper.ZooKeeper) 11:53:07 kafka | [2025-06-17 11:46:59,090] INFO Client environment:os.memory.max=8042MB (org.apache.zookeeper.ZooKeeper) 11:53:07 kafka | [2025-06-17 11:46:59,090] INFO Client environment:os.memory.total=504MB (org.apache.zookeeper.ZooKeeper) 11:53:07 kafka | [2025-06-17 11:46:59,092] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@19dc67c2 (org.apache.zookeeper.ZooKeeper) 11:53:07 kafka | [2025-06-17 11:46:59,095] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) 11:53:07 kafka | [2025-06-17 11:46:59,099] INFO jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket) 11:53:07 kafka | [2025-06-17 11:46:59,105] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) 11:53:07 kafka | [2025-06-17 11:46:59,119] INFO Opening socket connection to server zookeeper/172.17.0.2:2181. (org.apache.zookeeper.ClientCnxn) 11:53:07 kafka | [2025-06-17 11:46:59,120] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn) 11:53:07 kafka | [2025-06-17 11:46:59,130] INFO Socket connection established, initiating session, client: /172.17.0.5:38442, server: zookeeper/172.17.0.2:2181 (org.apache.zookeeper.ClientCnxn) 11:53:07 kafka | [2025-06-17 11:46:59,154] INFO Session establishment complete on server zookeeper/172.17.0.2:2181, session id = 0x10000023f570000, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn) 11:53:07 kafka | [2025-06-17 11:46:59,278] INFO Session: 0x10000023f570000 closed (org.apache.zookeeper.ZooKeeper) 11:53:07 kafka | [2025-06-17 11:46:59,278] INFO EventThread shut down for session: 0x10000023f570000 (org.apache.zookeeper.ClientCnxn) 11:53:07 kafka | Using log4j config /etc/kafka/log4j.properties 11:53:07 kafka | ===> Launching ... 11:53:07 kafka | ===> Launching kafka ... 11:53:07 kafka | [2025-06-17 11:46:59,953] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$) 11:53:07 kafka | [2025-06-17 11:47:00,218] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) 11:53:07 kafka | [2025-06-17 11:47:00,289] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler) 11:53:07 kafka | [2025-06-17 11:47:00,290] INFO starting (kafka.server.KafkaServer) 11:53:07 kafka | [2025-06-17 11:47:00,291] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer) 11:53:07 kafka | [2025-06-17 11:47:00,302] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181. 
(kafka.zookeeper.ZooKeeperClient) 11:53:07 kafka | [2025-06-17 11:47:00,306] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper) 11:53:07 kafka | [2025-06-17 11:47:00,306] INFO Client environment:host.name=kafka (org.apache.zookeeper.ZooKeeper) 11:53:07 kafka | [2025-06-17 11:47:00,306] INFO Client environment:java.version=11.0.26 (org.apache.zookeeper.ZooKeeper) 11:53:07 kafka | [2025-06-17 11:47:00,306] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper) 11:53:07 kafka | [2025-06-17 11:47:00,306] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) 11:53:07 kafka | [2025-06-17 11:47:00,306] INFO Client environment:java.class.path=/usr/bin/../share/java/kafka/kafka-storage-api-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/netty-common-4.1.115.Final.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/connect-json-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/scala-library-2.13.10.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/kafka-shell-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/reflections-0.9.12.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/kafka-clients-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/kafka-storage-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/kafka-raft-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/kafka-streams-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.0.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.2-1.jar:/usr/bin/../share/java/kafka/connect-runtime-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.115.Final.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/connect-api-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.115.Final.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.115.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.1.2.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.115.Final.jar:/usr/bin/../share/java/kafka/c
onnect-basic-auth-extension-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.115.Final.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/trogdor-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/commons-io-2.16.0.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/connect-transforms-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/connect-mirror-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/kafka-tools-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.10.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.115.Final.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.4.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.115.Final.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.115.Final.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.57.v20241219.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.ZooKeeper) 11:53:07 kafka | [2025-06-17 11:47:00,306] INFO Client 
environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) 11:53:07 kafka | [2025-06-17 11:47:00,306] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) 11:53:07 kafka | [2025-06-17 11:47:00,306] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) 11:53:07 kafka | [2025-06-17 11:47:00,306] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) 11:53:07 kafka | [2025-06-17 11:47:00,306] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) 11:53:07 kafka | [2025-06-17 11:47:00,306] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) 11:53:07 kafka | [2025-06-17 11:47:00,306] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) 11:53:07 kafka | [2025-06-17 11:47:00,306] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) 11:53:07 kafka | [2025-06-17 11:47:00,306] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) 11:53:07 kafka | [2025-06-17 11:47:00,306] INFO Client environment:os.memory.free=1009MB (org.apache.zookeeper.ZooKeeper) 11:53:07 kafka | [2025-06-17 11:47:00,307] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper) 11:53:07 kafka | [2025-06-17 11:47:00,307] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper) 11:53:07 kafka | [2025-06-17 11:47:00,308] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@52851b44 (org.apache.zookeeper.ZooKeeper) 11:53:07 kafka | [2025-06-17 11:47:00,312] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket) 11:53:07 kafka | [2025-06-17 11:47:00,317] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) 11:53:07 kafka | [2025-06-17 11:47:00,318] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient) 11:53:07 kafka | [2025-06-17 11:47:00,325] INFO Opening socket connection to server zookeeper/172.17.0.2:2181. (org.apache.zookeeper.ClientCnxn) 11:53:07 kafka | [2025-06-17 11:47:00,332] INFO Socket connection established, initiating session, client: /172.17.0.5:38444, server: zookeeper/172.17.0.2:2181 (org.apache.zookeeper.ClientCnxn) 11:53:07 kafka | [2025-06-17 11:47:00,366] INFO Session establishment complete on server zookeeper/172.17.0.2:2181, session id = 0x10000023f570001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn) 11:53:07 kafka | [2025-06-17 11:47:00,371] INFO [ZooKeeperClient Kafka server] Connected. 
(kafka.zookeeper.ZooKeeperClient) 11:53:07 kafka | [2025-06-17 11:47:00,695] INFO Cluster ID = ZaVd10B5QzSHTyht7yX6_w (kafka.server.KafkaServer) 11:53:07 kafka | [2025-06-17 11:47:00,699] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint) 11:53:07 kafka | [2025-06-17 11:47:00,743] INFO KafkaConfig values: 11:53:07 kafka | advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092 11:53:07 kafka | alter.config.policy.class.name = null 11:53:07 kafka | alter.log.dirs.replication.quota.window.num = 11 11:53:07 kafka | alter.log.dirs.replication.quota.window.size.seconds = 1 11:53:07 kafka | authorizer.class.name = 11:53:07 kafka | auto.create.topics.enable = true 11:53:07 kafka | auto.include.jmx.reporter = true 11:53:07 kafka | auto.leader.rebalance.enable = true 11:53:07 kafka | background.threads = 10 11:53:07 kafka | broker.heartbeat.interval.ms = 2000 11:53:07 kafka | broker.id = 1 11:53:07 kafka | broker.id.generation.enable = true 11:53:07 kafka | broker.rack = null 11:53:07 kafka | broker.session.timeout.ms = 9000 11:53:07 kafka | client.quota.callback.class = null 11:53:07 kafka | compression.type = producer 11:53:07 kafka | connection.failed.authentication.delay.ms = 100 11:53:07 kafka | connections.max.idle.ms = 600000 11:53:07 kafka | connections.max.reauth.ms = 0 11:53:07 kafka | control.plane.listener.name = null 11:53:07 kafka | controlled.shutdown.enable = true 11:53:07 kafka | controlled.shutdown.max.retries = 3 11:53:07 kafka | controlled.shutdown.retry.backoff.ms = 5000 11:53:07 kafka | controller.listener.names = null 11:53:07 kafka | controller.quorum.append.linger.ms = 25 11:53:07 kafka | controller.quorum.election.backoff.max.ms = 1000 11:53:07 kafka | controller.quorum.election.timeout.ms = 1000 11:53:07 kafka | controller.quorum.fetch.timeout.ms = 2000 11:53:07 kafka | controller.quorum.request.timeout.ms = 2000 11:53:07 kafka | controller.quorum.retry.backoff.ms = 20 11:53:07 kafka | controller.quorum.voters = [] 11:53:07 kafka | controller.quota.window.num = 11 11:53:07 kafka | controller.quota.window.size.seconds = 1 11:53:07 kafka | controller.socket.timeout.ms = 30000 11:53:07 kafka | create.topic.policy.class.name = null 11:53:07 kafka | default.replication.factor = 1 11:53:07 kafka | delegation.token.expiry.check.interval.ms = 3600000 11:53:07 kafka | delegation.token.expiry.time.ms = 86400000 11:53:07 kafka | delegation.token.master.key = null 11:53:07 kafka | delegation.token.max.lifetime.ms = 604800000 11:53:07 kafka | delegation.token.secret.key = null 11:53:07 kafka | delete.records.purgatory.purge.interval.requests = 1 11:53:07 kafka | delete.topic.enable = true 11:53:07 kafka | early.start.listeners = null 11:53:07 kafka | fetch.max.bytes = 57671680 11:53:07 kafka | fetch.purgatory.purge.interval.requests = 1000 11:53:07 kafka | group.initial.rebalance.delay.ms = 3000 11:53:07 kafka | group.max.session.timeout.ms = 1800000 11:53:07 kafka | group.max.size = 2147483647 11:53:07 kafka | group.min.session.timeout.ms = 6000 11:53:07 kafka | initial.broker.registration.timeout.ms = 60000 11:53:07 kafka | inter.broker.listener.name = PLAINTEXT 11:53:07 kafka | inter.broker.protocol.version = 3.4-IV0 11:53:07 kafka | kafka.metrics.polling.interval.secs = 10 11:53:07 kafka | kafka.metrics.reporters = [] 11:53:07 kafka | leader.imbalance.check.interval.seconds = 300 11:53:07 kafka | leader.imbalance.per.broker.percentage = 10 11:53:07 kafka | listener.security.protocol.map = 
PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT 11:53:07 kafka | listeners = PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092 11:53:07 kafka | log.cleaner.backoff.ms = 15000 11:53:07 kafka | log.cleaner.dedupe.buffer.size = 134217728 11:53:07 kafka | log.cleaner.delete.retention.ms = 86400000 11:53:07 kafka | log.cleaner.enable = true 11:53:07 kafka | log.cleaner.io.buffer.load.factor = 0.9 11:53:07 kafka | log.cleaner.io.buffer.size = 524288 11:53:07 kafka | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 11:53:07 kafka | log.cleaner.max.compaction.lag.ms = 9223372036854775807 11:53:07 kafka | log.cleaner.min.cleanable.ratio = 0.5 11:53:07 kafka | log.cleaner.min.compaction.lag.ms = 0 11:53:07 kafka | log.cleaner.threads = 1 11:53:07 kafka | log.cleanup.policy = [delete] 11:53:07 kafka | log.dir = /tmp/kafka-logs 11:53:07 kafka | log.dirs = /var/lib/kafka/data 11:53:07 kafka | log.flush.interval.messages = 9223372036854775807 11:53:07 kafka | log.flush.interval.ms = null 11:53:07 kafka | log.flush.offset.checkpoint.interval.ms = 60000 11:53:07 kafka | log.flush.scheduler.interval.ms = 9223372036854775807 11:53:07 kafka | log.flush.start.offset.checkpoint.interval.ms = 60000 11:53:07 kafka | log.index.interval.bytes = 4096 11:53:07 kafka | log.index.size.max.bytes = 10485760 11:53:07 kafka | log.message.downconversion.enable = true 11:53:07 kafka | log.message.format.version = 3.0-IV1 11:53:07 kafka | log.message.timestamp.difference.max.ms = 9223372036854775807 11:53:07 kafka | log.message.timestamp.type = CreateTime 11:53:07 kafka | log.preallocate = false 11:53:07 kafka | log.retention.bytes = -1 11:53:07 kafka | log.retention.check.interval.ms = 300000 11:53:07 kafka | log.retention.hours = 168 11:53:07 kafka | log.retention.minutes = null 11:53:07 kafka | log.retention.ms = null 11:53:07 kafka | log.roll.hours = 168 11:53:07 kafka | log.roll.jitter.hours = 0 11:53:07 kafka | log.roll.jitter.ms = null 11:53:07 kafka | log.roll.ms = null 11:53:07 kafka | log.segment.bytes = 1073741824 11:53:07 kafka | log.segment.delete.delay.ms = 60000 11:53:07 kafka | max.connection.creation.rate = 2147483647 11:53:07 kafka | max.connections = 2147483647 11:53:07 kafka | max.connections.per.ip = 2147483647 11:53:07 kafka | max.connections.per.ip.overrides = 11:53:07 kafka | max.incremental.fetch.session.cache.slots = 1000 11:53:07 kafka | message.max.bytes = 1048588 11:53:07 kafka | metadata.log.dir = null 11:53:07 kafka | metadata.log.max.record.bytes.between.snapshots = 20971520 11:53:07 kafka | metadata.log.max.snapshot.interval.ms = 3600000 11:53:07 kafka | metadata.log.segment.bytes = 1073741824 11:53:07 kafka | metadata.log.segment.min.bytes = 8388608 11:53:07 kafka | metadata.log.segment.ms = 604800000 11:53:07 kafka | metadata.max.idle.interval.ms = 500 11:53:07 kafka | metadata.max.retention.bytes = 104857600 11:53:07 kafka | metadata.max.retention.ms = 604800000 11:53:07 kafka | metric.reporters = [] 11:53:07 kafka | metrics.num.samples = 2 11:53:07 kafka | metrics.recording.level = INFO 11:53:07 kafka | metrics.sample.window.ms = 30000 11:53:07 kafka | min.insync.replicas = 1 11:53:07 kafka | node.id = 1 11:53:07 kafka | num.io.threads = 8 11:53:07 kafka | num.network.threads = 3 11:53:07 kafka | num.partitions = 1 11:53:07 kafka | num.recovery.threads.per.data.dir = 1 11:53:07 kafka | num.replica.alter.log.dirs.threads = null 11:53:07 kafka | num.replica.fetchers = 1 11:53:07 kafka | offset.metadata.max.bytes = 4096 11:53:07 kafka | offsets.commit.required.acks = -1 
11:53:07 kafka | offsets.commit.timeout.ms = 5000 11:53:07 kafka | offsets.load.buffer.size = 5242880 11:53:07 kafka | offsets.retention.check.interval.ms = 600000 11:53:07 kafka | offsets.retention.minutes = 10080 11:53:07 kafka | offsets.topic.compression.codec = 0 11:53:07 kafka | offsets.topic.num.partitions = 50 11:53:07 kafka | offsets.topic.replication.factor = 1 11:53:07 kafka | offsets.topic.segment.bytes = 104857600 11:53:07 kafka | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding 11:53:07 kafka | password.encoder.iterations = 4096 11:53:07 kafka | password.encoder.key.length = 128 11:53:07 kafka | password.encoder.keyfactory.algorithm = null 11:53:07 kafka | password.encoder.old.secret = null 11:53:07 kafka | password.encoder.secret = null 11:53:07 kafka | principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder 11:53:07 kafka | process.roles = [] 11:53:07 kafka | producer.id.expiration.check.interval.ms = 600000 11:53:07 kafka | producer.id.expiration.ms = 86400000 11:53:07 kafka | producer.purgatory.purge.interval.requests = 1000 11:53:07 kafka | queued.max.request.bytes = -1 11:53:07 kafka | queued.max.requests = 500 11:53:07 kafka | quota.window.num = 11 11:53:07 kafka | quota.window.size.seconds = 1 11:53:07 kafka | remote.log.index.file.cache.total.size.bytes = 1073741824 11:53:07 kafka | remote.log.manager.task.interval.ms = 30000 11:53:07 kafka | remote.log.manager.task.retry.backoff.max.ms = 30000 11:53:07 kafka | remote.log.manager.task.retry.backoff.ms = 500 11:53:07 kafka | remote.log.manager.task.retry.jitter = 0.2 11:53:07 kafka | remote.log.manager.thread.pool.size = 10 11:53:07 kafka | remote.log.metadata.manager.class.name = null 11:53:07 kafka | remote.log.metadata.manager.class.path = null 11:53:07 kafka | remote.log.metadata.manager.impl.prefix = null 11:53:07 kafka | remote.log.metadata.manager.listener.name = null 11:53:07 kafka | remote.log.reader.max.pending.tasks = 100 11:53:07 kafka | remote.log.reader.threads = 10 11:53:07 kafka | remote.log.storage.manager.class.name = null 11:53:07 kafka | remote.log.storage.manager.class.path = null 11:53:07 kafka | remote.log.storage.manager.impl.prefix = null 11:53:07 kafka | remote.log.storage.system.enable = false 11:53:07 kafka | replica.fetch.backoff.ms = 1000 11:53:07 kafka | replica.fetch.max.bytes = 1048576 11:53:07 kafka | replica.fetch.min.bytes = 1 11:53:07 kafka | replica.fetch.response.max.bytes = 10485760 11:53:07 kafka | replica.fetch.wait.max.ms = 500 11:53:07 kafka | replica.high.watermark.checkpoint.interval.ms = 5000 11:53:07 kafka | replica.lag.time.max.ms = 30000 11:53:07 kafka | replica.selector.class = null 11:53:07 kafka | replica.socket.receive.buffer.bytes = 65536 11:53:07 kafka | replica.socket.timeout.ms = 30000 11:53:07 kafka | replication.quota.window.num = 11 11:53:07 kafka | replication.quota.window.size.seconds = 1 11:53:07 kafka | request.timeout.ms = 30000 11:53:07 kafka | reserved.broker.max.id = 1000 11:53:07 kafka | sasl.client.callback.handler.class = null 11:53:07 kafka | sasl.enabled.mechanisms = [GSSAPI] 11:53:07 kafka | sasl.jaas.config = null 11:53:07 kafka | sasl.kerberos.kinit.cmd = /usr/bin/kinit 11:53:07 kafka | sasl.kerberos.min.time.before.relogin = 60000 11:53:07 kafka | sasl.kerberos.principal.to.local.rules = [DEFAULT] 11:53:07 kafka | sasl.kerberos.service.name = null 11:53:07 kafka | sasl.kerberos.ticket.renew.jitter = 0.05 11:53:07 kafka | sasl.kerberos.ticket.renew.window.factor = 0.8 11:53:07 kafka | 
sasl.login.callback.handler.class = null 11:53:07 kafka | sasl.login.class = null 11:53:07 kafka | sasl.login.connect.timeout.ms = null 11:53:07 kafka | sasl.login.read.timeout.ms = null 11:53:07 kafka | sasl.login.refresh.buffer.seconds = 300 11:53:07 kafka | sasl.login.refresh.min.period.seconds = 60 11:53:07 kafka | sasl.login.refresh.window.factor = 0.8 11:53:07 kafka | sasl.login.refresh.window.jitter = 0.05 11:53:07 kafka | sasl.login.retry.backoff.max.ms = 10000 11:53:07 kafka | sasl.login.retry.backoff.ms = 100 11:53:07 kafka | sasl.mechanism.controller.protocol = GSSAPI 11:53:07 kafka | sasl.mechanism.inter.broker.protocol = GSSAPI 11:53:07 kafka | sasl.oauthbearer.clock.skew.seconds = 30 11:53:07 kafka | sasl.oauthbearer.expected.audience = null 11:53:07 kafka | sasl.oauthbearer.expected.issuer = null 11:53:07 kafka | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 11:53:07 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 11:53:07 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 11:53:07 kafka | sasl.oauthbearer.jwks.endpoint.url = null 11:53:07 kafka | sasl.oauthbearer.scope.claim.name = scope 11:53:07 kafka | sasl.oauthbearer.sub.claim.name = sub 11:53:07 kafka | sasl.oauthbearer.token.endpoint.url = null 11:53:07 kafka | sasl.server.callback.handler.class = null 11:53:07 kafka | sasl.server.max.receive.size = 524288 11:53:07 kafka | security.inter.broker.protocol = PLAINTEXT 11:53:07 kafka | security.providers = null 11:53:07 kafka | socket.connection.setup.timeout.max.ms = 30000 11:53:07 kafka | socket.connection.setup.timeout.ms = 10000 11:53:07 kafka | socket.listen.backlog.size = 50 11:53:07 kafka | socket.receive.buffer.bytes = 102400 11:53:07 kafka | socket.request.max.bytes = 104857600 11:53:07 kafka | socket.send.buffer.bytes = 102400 11:53:07 kafka | ssl.cipher.suites = [] 11:53:07 kafka | ssl.client.auth = none 11:53:07 kafka | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 11:53:07 kafka | ssl.endpoint.identification.algorithm = https 11:53:07 kafka | ssl.engine.factory.class = null 11:53:07 kafka | ssl.key.password = null 11:53:07 kafka | ssl.keymanager.algorithm = SunX509 11:53:07 kafka | ssl.keystore.certificate.chain = null 11:53:07 kafka | ssl.keystore.key = null 11:53:07 kafka | ssl.keystore.location = null 11:53:07 kafka | ssl.keystore.password = null 11:53:07 kafka | ssl.keystore.type = JKS 11:53:07 kafka | ssl.principal.mapping.rules = DEFAULT 11:53:07 kafka | ssl.protocol = TLSv1.3 11:53:07 kafka | ssl.provider = null 11:53:07 kafka | ssl.secure.random.implementation = null 11:53:07 kafka | ssl.trustmanager.algorithm = PKIX 11:53:07 kafka | ssl.truststore.certificates = null 11:53:07 kafka | ssl.truststore.location = null 11:53:07 kafka | ssl.truststore.password = null 11:53:07 kafka | ssl.truststore.type = JKS 11:53:07 kafka | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000 11:53:07 kafka | transaction.max.timeout.ms = 900000 11:53:07 kafka | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000 11:53:07 kafka | transaction.state.log.load.buffer.size = 5242880 11:53:07 kafka | transaction.state.log.min.isr = 2 11:53:07 kafka | transaction.state.log.num.partitions = 50 11:53:07 kafka | transaction.state.log.replication.factor = 3 11:53:07 kafka | transaction.state.log.segment.bytes = 104857600 11:53:07 kafka | transactional.id.expiration.ms = 604800000 11:53:07 kafka | unclean.leader.election.enable = false 11:53:07 kafka | zookeeper.clientCnxnSocket = null 11:53:07 kafka | 
zookeeper.connect = zookeeper:2181 11:53:07 kafka | zookeeper.connection.timeout.ms = null 11:53:07 kafka | zookeeper.max.in.flight.requests = 10 11:53:07 kafka | zookeeper.metadata.migration.enable = false 11:53:07 kafka | zookeeper.session.timeout.ms = 18000 11:53:07 kafka | zookeeper.set.acl = false 11:53:07 kafka | zookeeper.ssl.cipher.suites = null 11:53:07 kafka | zookeeper.ssl.client.enable = false 11:53:07 kafka | zookeeper.ssl.crl.enable = false 11:53:07 kafka | zookeeper.ssl.enabled.protocols = null 11:53:07 kafka | zookeeper.ssl.endpoint.identification.algorithm = HTTPS 11:53:07 kafka | zookeeper.ssl.keystore.location = null 11:53:07 kafka | zookeeper.ssl.keystore.password = null 11:53:07 kafka | zookeeper.ssl.keystore.type = null 11:53:07 kafka | zookeeper.ssl.ocsp.enable = false 11:53:07 kafka | zookeeper.ssl.protocol = TLSv1.2 11:53:07 kafka | zookeeper.ssl.truststore.location = null 11:53:07 kafka | zookeeper.ssl.truststore.password = null 11:53:07 kafka | zookeeper.ssl.truststore.type = null 11:53:07 kafka | (kafka.server.KafkaConfig) 11:53:07 kafka | [2025-06-17 11:47:00,777] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 11:53:07 kafka | [2025-06-17 11:47:00,777] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 11:53:07 kafka | [2025-06-17 11:47:00,777] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 11:53:07 kafka | [2025-06-17 11:47:00,777] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 11:53:07 kafka | [2025-06-17 11:47:00,811] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager) 11:53:07 kafka | [2025-06-17 11:47:00,813] INFO Attempting recovery for all logs in /var/lib/kafka/data since no clean shutdown file was found (kafka.log.LogManager) 11:53:07 kafka | [2025-06-17 11:47:00,825] INFO Loaded 0 logs in 14ms. (kafka.log.LogManager) 11:53:07 kafka | [2025-06-17 11:47:00,825] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager) 11:53:07 kafka | [2025-06-17 11:47:00,827] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager) 11:53:07 kafka | [2025-06-17 11:47:00,837] INFO Starting the log cleaner (kafka.log.LogCleaner) 11:53:07 kafka | [2025-06-17 11:47:00,876] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner) 11:53:07 kafka | [2025-06-17 11:47:00,893] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread) 11:53:07 kafka | [2025-06-17 11:47:00,904] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener) 11:53:07 kafka | [2025-06-17 11:47:00,941] INFO [BrokerToControllerChannelManager broker=1 name=forwarding]: Starting (kafka.server.BrokerToControllerRequestThread) 11:53:07 kafka | [2025-06-17 11:47:01,271] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) 11:53:07 kafka | [2025-06-17 11:47:01,274] INFO Awaiting socket connections on 0.0.0.0:9092. 
(kafka.network.DataPlaneAcceptor) 11:53:07 kafka | [2025-06-17 11:47:01,296] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer) 11:53:07 kafka | [2025-06-17 11:47:01,297] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) 11:53:07 kafka | [2025-06-17 11:47:01,297] INFO Awaiting socket connections on 0.0.0.0:29092. (kafka.network.DataPlaneAcceptor) 11:53:07 kafka | [2025-06-17 11:47:01,301] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT_HOST) (kafka.network.SocketServer) 11:53:07 kafka | [2025-06-17 11:47:01,306] INFO [BrokerToControllerChannelManager broker=1 name=alterPartition]: Starting (kafka.server.BrokerToControllerRequestThread) 11:53:07 kafka | [2025-06-17 11:47:01,325] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 11:53:07 kafka | [2025-06-17 11:47:01,333] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 11:53:07 kafka | [2025-06-17 11:47:01,336] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 11:53:07 kafka | [2025-06-17 11:47:01,337] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 11:53:07 kafka | [2025-06-17 11:47:01,358] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler) 11:53:07 kafka | [2025-06-17 11:47:01,385] INFO Creating /brokers/ids/1 (is it secure? false) (kafka.zk.KafkaZkClient) 11:53:07 kafka | [2025-06-17 11:47:01,416] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1750160821399,1750160821399,1,0,0,72057603690528769,258,0,27 11:53:07 kafka | (kafka.zk.KafkaZkClient) 11:53:07 kafka | [2025-06-17 11:47:01,418] INFO Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, czxid (broker epoch): 27 (kafka.zk.KafkaZkClient) 11:53:07 kafka | [2025-06-17 11:47:01,473] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread) 11:53:07 kafka | [2025-06-17 11:47:01,483] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 11:53:07 kafka | [2025-06-17 11:47:01,493] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient) 11:53:07 kafka | [2025-06-17 11:47:01,494] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 11:53:07 kafka | [2025-06-17 11:47:01,500] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 11:53:07 kafka | [2025-06-17 11:47:01,504] INFO [Controller id=1] 1 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController) 11:53:07 kafka | [2025-06-17 11:47:01,507] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController) 11:53:07 kafka | [2025-06-17 11:47:01,512] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener) 11:53:07 kafka | [2025-06-17 11:47:01,521] INFO [GroupCoordinator 1]: Starting up. 
(kafka.coordinator.group.GroupCoordinator) 11:53:07 kafka | [2025-06-17 11:47:01,525] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator) 11:53:07 kafka | [2025-06-17 11:47:01,539] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator) 11:53:07 kafka | [2025-06-17 11:47:01,543] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator) 11:53:07 kafka | [2025-06-17 11:47:01,543] INFO [Transaction Marker Channel Manager 1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager) 11:53:07 kafka | [2025-06-17 11:47:01,547] INFO [MetadataCache brokerId=1] Updated cache from existing to latest FinalizedFeaturesAndEpoch(features=Map(), epoch=0). (kafka.server.metadata.ZkMetadataCache) 11:53:07 kafka | [2025-06-17 11:47:01,547] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController) 11:53:07 kafka | [2025-06-17 11:47:01,552] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController) 11:53:07 kafka | [2025-06-17 11:47:01,555] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController) 11:53:07 kafka | [2025-06-17 11:47:01,557] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController) 11:53:07 kafka | [2025-06-17 11:47:01,571] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController) 11:53:07 kafka | [2025-06-17 11:47:01,576] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 11:53:07 kafka | [2025-06-17 11:47:01,577] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController) 11:53:07 kafka | [2025-06-17 11:47:01,582] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager) 11:53:07 kafka | [2025-06-17 11:47:01,594] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController) 11:53:07 kafka | [2025-06-17 11:47:01,597] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread) 11:53:07 kafka | [2025-06-17 11:47:01,598] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread) 11:53:07 kafka | [2025-06-17 11:47:01,599] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController) 11:53:07 kafka | [2025-06-17 11:47:01,599] INFO [Controller id=1] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController) 11:53:07 kafka | [2025-06-17 11:47:01,600] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController) 11:53:07 kafka | [2025-06-17 11:47:01,604] INFO [Controller id=1] List of topics to be deleted: (kafka.controller.KafkaController) 11:53:07 kafka | [2025-06-17 11:47:01,604] INFO [Controller id=1] List of topics ineligible for deletion: (kafka.controller.KafkaController) 11:53:07 kafka | [2025-06-17 11:47:01,605] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController) 11:53:07 kafka | [2025-06-17 11:47:01,606] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager) 11:53:07 kafka | [2025-06-17 11:47:01,607] INFO 
[Controller id=1] Sending update metadata request (kafka.controller.KafkaController) 11:53:07 kafka | [2025-06-17 11:47:01,608] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. (kafka.network.SocketServer) 11:53:07 kafka | [2025-06-17 11:47:01,610] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:01,624] INFO [ReplicaStateMachine controllerId=1] Initializing replica state (kafka.controller.ZkReplicaStateMachine) 11:53:07 kafka | [2025-06-17 11:47:01,624] INFO Kafka version: 7.4.9-ccs (org.apache.kafka.common.utils.AppInfoParser) 11:53:07 kafka | [2025-06-17 11:47:01,624] INFO Kafka commitId: 07d888cfc0d14765fe5557324f1fdb4ada6698a5 (org.apache.kafka.common.utils.AppInfoParser) 11:53:07 kafka | [2025-06-17 11:47:01,624] INFO Kafka startTimeMs: 1750160821613 (org.apache.kafka.common.utils.AppInfoParser) 11:53:07 kafka | [2025-06-17 11:47:01,624] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine) 11:53:07 kafka | [2025-06-17 11:47:01,626] INFO [KafkaServer id=1] started (kafka.server.KafkaServer) 11:53:07 kafka | [2025-06-17 11:47:01,635] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread) 11:53:07 kafka | [2025-06-17 11:47:01,636] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine) 11:53:07 kafka | [2025-06-17 11:47:01,638] DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine) 11:53:07 kafka | [2025-06-17 11:47:01,638] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine) 11:53:07 kafka | [2025-06-17 11:47:01,641] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine) 11:53:07 kafka | [2025-06-17 11:47:01,651] DEBUG [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine) 11:53:07 kafka | [2025-06-17 11:47:01,652] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController) 11:53:07 kafka | [2025-06-17 11:47:01,658] INFO [Controller id=1] Partitions undergoing preferred replica election: (kafka.controller.KafkaController) 11:53:07 kafka | [2025-06-17 11:47:01,658] INFO [Controller id=1] Partitions that completed preferred replica election: (kafka.controller.KafkaController) 11:53:07 kafka | [2025-06-17 11:47:01,659] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController) 11:53:07 kafka | [2025-06-17 11:47:01,659] INFO [Controller id=1] Resuming preferred replica election for partitions: (kafka.controller.KafkaController) 11:53:07 kafka | [2025-06-17 11:47:01,660] INFO [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered (kafka.controller.KafkaController) 11:53:07 kafka | [2025-06-17 11:47:01,676] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController) 11:53:07 kafka | [2025-06-17 11:47:01,705] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) 
for request UPDATE_METADATA with correlation id 0 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:01,711] INFO [BrokerToControllerChannelManager broker=1 name=alterPartition]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) 11:53:07 kafka | [2025-06-17 11:47:01,756] INFO [BrokerToControllerChannelManager broker=1 name=forwarding]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) 11:53:07 kafka | [2025-06-17 11:47:06,678] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController) 11:53:07 kafka | [2025-06-17 11:47:06,679] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController) 11:53:07 kafka | [2025-06-17 11:47:35,504] INFO Creating topic policy-pdp-pap with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) 11:53:07 kafka | [2025-06-17 11:47:35,505] DEBUG [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block (kafka.controller.KafkaController) 11:53:07 kafka | [2025-06-17 11:47:35,509] INFO Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) 11:53:07 kafka | [2025-06-17 11:47:35,512] INFO [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 (kafka.controller.KafkaController) 11:53:07 kafka | [2025-06-17 11:47:35,550] INFO [Controller id=1] New topics: [Set(policy-pdp-pap, __consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-pdp-pap,Some(jMVTtaXxQSCPs2-SiYbXoQ),Map(policy-pdp-pap-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))), TopicIdReplicaAssignment(__consumer_offsets,Some(vdtI5O6tRD2bEfSOaD9o4w),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), 
__consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, 
addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) 11:53:07 kafka | [2025-06-17 11:47:35,551] INFO [Controller id=1] New partition creation callback for __consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-37,policy-pdp-pap-0,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.controller.KafkaController) 11:53:07 kafka | [2025-06-17 11:47:35,554] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,554] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,554] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,554] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,554] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,554] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,554] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 
11:47:35,555] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,555] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,555] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,555] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,555] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,555] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,555] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,555] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,555] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,555] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,555] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,555] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,555] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,555] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,555] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,555] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,556] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 
11:47:35,556] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,556] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,556] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,556] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,556] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,556] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,556] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,556] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,556] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,556] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,556] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,556] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,556] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,556] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,556] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,556] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,556] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 
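
These back-to-back records are the controller's partition state machine at work: every new partition is driven NonExistentPartition -> NewPartition here, and NewPartition -> OnlinePartition further down once a leader is elected. A minimal sketch of the legal transitions (state names mirror the log; the table is a simplified reading of Kafka's controller behaviour, not an API from any client library):

VALID_PARTITION_TRANSITIONS = {
    "NonExistentPartition": {"NewPartition"},
    "NewPartition": {"OnlinePartition", "OfflinePartition"},
    "OnlinePartition": {"OnlinePartition", "OfflinePartition"},
    "OfflinePartition": {"OnlinePartition", "OfflinePartition", "NonExistentPartition"},
}

def is_legal(old_state: str, new_state: str) -> bool:
    # True when a "Changed partition ... from <old> to <new>" record is expected.
    return new_state in VALID_PARTITION_TRANSITIONS.get(old_state, set())

assert is_legal("NonExistentPartition", "NewPartition")
assert is_legal("NewPartition", "OnlinePartition")
assert not is_legal("NonExistentPartition", "OnlinePartition")
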
11:47:35,556] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,556] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,556] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,556] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,556] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,557] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,557] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,557] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,557] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,557] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,557] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,562] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NonExistentReplica to NewReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,562] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NonExistentReplica to NewReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,562] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NonExistentReplica to NewReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,562] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NonExistentReplica to NewReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,562] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NonExistentReplica to NewReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,562] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NonExistentReplica to NewReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,562] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from 
NonExistentReplica to NewReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,562] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NonExistentReplica to NewReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,562] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NonExistentReplica to NewReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,562] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NonExistentReplica to NewReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,562] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NonExistentReplica to NewReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,563] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NonExistentReplica to NewReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,563] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NonExistentReplica to NewReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,563] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NonExistentReplica to NewReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,563] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NonExistentReplica to NewReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,563] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NonExistentReplica to NewReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,563] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NonExistentReplica to NewReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,563] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NonExistentReplica to NewReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,563] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NonExistentReplica to NewReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,563] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NonExistentReplica to NewReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,563] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NonExistentReplica to NewReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,563] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NonExistentReplica to NewReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,563] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NonExistentReplica to NewReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,563] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NonExistentReplica to NewReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,563] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 
from NonExistentReplica to NewReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,563] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NonExistentReplica to NewReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,563] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NonExistentReplica to NewReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,563] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NonExistentReplica to NewReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,563] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NonExistentReplica to NewReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,563] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NonExistentReplica to NewReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,563] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NonExistentReplica to NewReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,563] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NonExistentReplica to NewReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,563] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NonExistentReplica to NewReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,563] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NonExistentReplica to NewReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,563] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NonExistentReplica to NewReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,564] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NonExistentReplica to NewReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,564] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NonExistentReplica to NewReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,564] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NonExistentReplica to NewReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,564] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NonExistentReplica to NewReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,564] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NonExistentReplica to NewReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,564] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NonExistentReplica to NewReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,564] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NonExistentReplica to NewReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,564] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition 
__consumer_offsets-31 from NonExistentReplica to NewReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,564] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NonExistentReplica to NewReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,564] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NonExistentReplica to NewReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,564] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NonExistentReplica to NewReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,564] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NonExistentReplica to NewReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,564] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NonExistentReplica to NewReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,565] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NonExistentReplica to NewReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,565] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NonExistentReplica to NewReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,565] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NonExistentReplica to NewReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,565] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,687] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,687] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,687] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,687] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,687] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,688] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:53:07 kafka | 
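
Replicas run through the parallel NonExistentReplica -> NewReplica machine above; the controller then elects leaders, moving each partition NewPartition -> OnlinePartition with LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), ...) — with a single broker, replica 1 is necessarily both the leader and the entire ISR. When triaging such runs offline, the payload can be pulled apart with the standard library; this sketch's regex is keyed to the exact format printed in these records:

import re

LEADER_AND_ISR = re.compile(
    r"LeaderAndIsr\(leader=(?P<leader>-?\d+), leaderEpoch=(?P<epoch>\d+), "
    r"isr=List\((?P<isr>[\d, ]*)\)"
)

def parse_leader_and_isr(record: str) -> dict:
    # Extract leader, leader epoch and ISR from a state.change.logger record.
    m = LEADER_AND_ISR.search(record)
    if m is None:
        raise ValueError("no LeaderAndIsr payload in record")
    return {
        "leader": int(m.group("leader")),
        "leader_epoch": int(m.group("epoch")),
        "isr": [int(x) for x in m.group("isr").split(",") if x.strip()],
    }

sample = ("Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition "
          "with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), "
          "leaderRecoveryState=RECOVERED, partitionEpoch=0)")
assert parse_leader_and_isr(sample) == {"leader": 1, "leader_epoch": 0, "isr": [1]}
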
[2025-06-17 11:47:35,688] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,688] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,688] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,688] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,688] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,688] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,689] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,689] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,689] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,689] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,689] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,689] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,689] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state 
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,689] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,690] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,690] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,690] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,690] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,690] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,690] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,690] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,690] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,690] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,691] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,691] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:53:07 kafka | [2025-06-17 
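
Runs like these compress well when tallied instead of read line by line. A hedged triage helper, standard library only — "console.log" stands for a locally saved copy of this output and is an assumption, not a file produced by the job:

import re
from collections import Counter

TRANSITION = re.compile(r"from (\w+) to (\w+)")

def tally_transitions(path: str) -> Counter:
    # Count each (old_state, new_state) pair across the whole log.
    counts: Counter = Counter()
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            counts.update(TRANSITION.findall(line))
    return counts

# For this run each pair should appear 51 times: the 50 __consumer_offsets
# partitions plus policy-pdp-pap-0.
print(tally_transitions("console.log").most_common())
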
11:47:35,691] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,691] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,691] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,691] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,691] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,692] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,692] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,692] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,692] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,692] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,692] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,692] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,692] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state 
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,693] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,693] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,693] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,693] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,693] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,693] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,693] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,695] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-13 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,695] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-46 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,695] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-9 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,695] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-42 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,696] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-21 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,696] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-17 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,696] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-30 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,696] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-26 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,696] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-5 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,696] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-38 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,696] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,696] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, 
replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-34 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,697] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-16 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,697] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-45 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,697] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-12 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,697] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-41 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,697] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-24 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,697] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-20 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,697] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-49 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,697] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-0 
(state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,698] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-29 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,698] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-25 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,698] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-8 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,698] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-37 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,698] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-4 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,698] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-33 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,698] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-15 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,698] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-48 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,698] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-11 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,699] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-44 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,699] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-23 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,699] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-19 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,699] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-32 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,699] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-28 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,699] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-7 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,699] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-40 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,699] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, 
replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-3 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,699] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-36 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,700] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-47 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,700] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-14 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,700] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-43 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,700] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-10 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,700] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-22 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,700] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-18 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,700] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-31 
(state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,701] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-27 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,701] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-39 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,701] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-6 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,701] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-35 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,701] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-pdp-pap-0 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,701] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-2 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,702] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 51 become-leader and 0 become-follower partitions (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,704] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 51 partitions (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,706] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NewReplica to OnlineReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,706] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NewReplica to OnlineReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,706] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NewReplica to OnlineReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,706] TRACE 
[Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NewReplica to OnlineReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,706] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NewReplica to OnlineReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,706] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NewReplica to OnlineReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,706] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NewReplica to OnlineReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,706] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NewReplica to OnlineReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,706] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NewReplica to OnlineReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,707] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NewReplica to OnlineReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,707] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NewReplica to OnlineReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,707] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NewReplica to OnlineReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,707] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NewReplica to OnlineReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,707] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NewReplica to OnlineReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,707] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NewReplica to OnlineReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,707] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NewReplica to OnlineReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,707] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NewReplica to OnlineReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,707] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NewReplica to OnlineReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,707] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NewReplica to OnlineReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,707] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NewReplica to OnlineReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,707] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NewReplica to OnlineReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,707] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 
from NewReplica to OnlineReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,708] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NewReplica to OnlineReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,708] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NewReplica to OnlineReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,708] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NewReplica to OnlineReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,708] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NewReplica to OnlineReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,708] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NewReplica to OnlineReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,708] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NewReplica to OnlineReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,708] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NewReplica to OnlineReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,708] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NewReplica to OnlineReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,708] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NewReplica to OnlineReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,708] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NewReplica to OnlineReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,708] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NewReplica to OnlineReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,708] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NewReplica to OnlineReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,708] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NewReplica to OnlineReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,709] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NewReplica to OnlineReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,709] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NewReplica to OnlineReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,709] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NewReplica to OnlineReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,709] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NewReplica to OnlineReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,709] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NewReplica to OnlineReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 
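
The controller batches all of this into a single RPC per broker: the summary records above report 51 become-leader and 0 become-follower partitions, then one UpdateMetadata request to brokers HashSet(1) for 51 partitions, before the broker acknowledges the LeaderAndIsr request below. A small standard-library check of that invariant against the summary line, sketch only:

import re

SUMMARY = re.compile(
    r"Sending LeaderAndIsr request to broker (\d+) with (\d+) become-leader "
    r"and (\d+) become-follower partitions"
)

record = ("Sending LeaderAndIsr request to broker 1 with 51 become-leader "
          "and 0 become-follower partitions")
broker, leaders, followers = map(int, SUMMARY.search(record).groups())
assert (broker, leaders, followers) == (1, 51, 0)
# 51 = 50 __consumer_offsets partitions + policy-pdp-pap-0; with a single
# broker nothing can become a follower.
assert leaders == 50 + 1 and followers == 0
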
11:47:35,709] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NewReplica to OnlineReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,709] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NewReplica to OnlineReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,709] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NewReplica to OnlineReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,709] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NewReplica to OnlineReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,709] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NewReplica to OnlineReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,709] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NewReplica to OnlineReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,709] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NewReplica to OnlineReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,709] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NewReplica to OnlineReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,709] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NewReplica to OnlineReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,709] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NewReplica to OnlineReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,710] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NewReplica to OnlineReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,710] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,713] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 51 partitions (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,714] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,714] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,714] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 
1 from controller 1 epoch 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,714] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,714] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,714] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,715] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,715] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,715] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,715] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,715] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,715] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) 
correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,715] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,715] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,715] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,715] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,715] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,715] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,715] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,715] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,715] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, 
leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,715] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,716] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,716] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,716] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,716] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,716] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,716] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,716] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,716] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], 
isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,716] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,716] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,716] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,716] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,716] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,716] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,716] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,717] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,717] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], 
removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,717] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,717] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,717] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,717] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,717] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,717] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,717] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,717] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,717] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], 
addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,717] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,717] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,717] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,751] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-3 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,751] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,751] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-41 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,751] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,751] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-33 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,751] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,751] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-19 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,751] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-34 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,752] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-4 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,752] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the 
become-leader transition for partition __consumer_offsets-11 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,752] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,752] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-49 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,752] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-39 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,752] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-9 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,752] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-24 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,752] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-31 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,752] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-46 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,752] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,752] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-16 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,752] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-2 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,752] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-25 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,752] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-40 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,752] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-47 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,752] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,752] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-32 (state.change.logger) 11:53:07 
kafka | [2025-06-17 11:47:35,752] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-37 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,752] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-7 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,752] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-22 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,752] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,753] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-44 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,753] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-14 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,753] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-23 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,753] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-38 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,753] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,753] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,753] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-45 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,753] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-15 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,753] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-30 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,753] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-0 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,753] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-35 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,753] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 
1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-5 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,753] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,753] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-27 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,753] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-42 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,753] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-12 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,753] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-21 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,753] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-36 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,753] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-6 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,754] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-43 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,754] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-13 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,754] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-28 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,755] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-37, policy-pdp-pap-0, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, 
__consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager) 11:53:07 kafka | [2025-06-17 11:47:35,755] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 51 partitions (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,801] INFO [LogLoader partition=__consumer_offsets-3, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:53:07 kafka | [2025-06-17 11:47:35,811] INFO Created log for partition __consumer_offsets-3 in /var/lib/kafka/data/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:53:07 kafka | [2025-06-17 11:47:35,813] INFO [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:35,814] INFO [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:35,815] INFO [Broker id=1] Leader __consumer_offsets-3 with topic id Some(vdtI5O6tRD2bEfSOaD9o4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,829] INFO [LogLoader partition=__consumer_offsets-18, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:53:07 kafka | [2025-06-17 11:47:35,830] INFO Created log for partition __consumer_offsets-18 in /var/lib/kafka/data/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:53:07 kafka | [2025-06-17 11:47:35,830] INFO [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:35,830] INFO [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:35,830] INFO [Broker id=1] Leader __consumer_offsets-18 with topic id Some(vdtI5O6tRD2bEfSOaD9o4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,838] INFO [LogLoader partition=__consumer_offsets-41, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:53:07 kafka | [2025-06-17 11:47:35,839] INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:53:07 kafka | [2025-06-17 11:47:35,839] INFO [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:35,839] INFO [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:35,839] INFO [Broker id=1] Leader __consumer_offsets-41 with topic id Some(vdtI5O6tRD2bEfSOaD9o4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,846] INFO [LogLoader partition=__consumer_offsets-10, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:53:07 kafka | [2025-06-17 11:47:35,847] INFO Created log for partition __consumer_offsets-10 in /var/lib/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:53:07 kafka | [2025-06-17 11:47:35,847] INFO [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:35,847] INFO [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:35,847] INFO [Broker id=1] Leader __consumer_offsets-10 with topic id Some(vdtI5O6tRD2bEfSOaD9o4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,854] INFO [LogLoader partition=__consumer_offsets-33, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:53:07 kafka | [2025-06-17 11:47:35,855] INFO Created log for partition __consumer_offsets-33 in /var/lib/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:53:07 kafka | [2025-06-17 11:47:35,855] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:35,855] INFO [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:35,855] INFO [Broker id=1] Leader __consumer_offsets-33 with topic id Some(vdtI5O6tRD2bEfSOaD9o4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,864] INFO [LogLoader partition=__consumer_offsets-48, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:53:07 kafka | [2025-06-17 11:47:35,865] INFO Created log for partition __consumer_offsets-48 in /var/lib/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:53:07 kafka | [2025-06-17 11:47:35,865] INFO [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:35,865] INFO [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:35,865] INFO [Broker id=1] Leader __consumer_offsets-48 with topic id Some(vdtI5O6tRD2bEfSOaD9o4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,873] INFO [LogLoader partition=__consumer_offsets-19, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:53:07 kafka | [2025-06-17 11:47:35,873] INFO Created log for partition __consumer_offsets-19 in /var/lib/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:53:07 kafka | [2025-06-17 11:47:35,874] INFO [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:35,874] INFO [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:35,874] INFO [Broker id=1] Leader __consumer_offsets-19 with topic id Some(vdtI5O6tRD2bEfSOaD9o4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,882] INFO [LogLoader partition=__consumer_offsets-34, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:53:07 kafka | [2025-06-17 11:47:35,883] INFO Created log for partition __consumer_offsets-34 in /var/lib/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:53:07 kafka | [2025-06-17 11:47:35,883] INFO [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:35,883] INFO [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:35,883] INFO [Broker id=1] Leader __consumer_offsets-34 with topic id Some(vdtI5O6tRD2bEfSOaD9o4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,891] INFO [LogLoader partition=__consumer_offsets-4, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:53:07 kafka | [2025-06-17 11:47:35,892] INFO Created log for partition __consumer_offsets-4 in /var/lib/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:53:07 kafka | [2025-06-17 11:47:35,892] INFO [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:35,892] INFO [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:35,892] INFO [Broker id=1] Leader __consumer_offsets-4 with topic id Some(vdtI5O6tRD2bEfSOaD9o4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,900] INFO [LogLoader partition=__consumer_offsets-11, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:53:07 kafka | [2025-06-17 11:47:35,900] INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:53:07 kafka | [2025-06-17 11:47:35,901] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:35,901] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:35,901] INFO [Broker id=1] Leader __consumer_offsets-11 with topic id Some(vdtI5O6tRD2bEfSOaD9o4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,915] INFO [LogLoader partition=__consumer_offsets-26, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:53:07 kafka | [2025-06-17 11:47:35,916] INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:53:07 kafka | [2025-06-17 11:47:35,916] INFO [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:35,917] INFO [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:35,917] INFO [Broker id=1] Leader __consumer_offsets-26 with topic id Some(vdtI5O6tRD2bEfSOaD9o4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,926] INFO [LogLoader partition=__consumer_offsets-49, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:53:07 kafka | [2025-06-17 11:47:35,927] INFO Created log for partition __consumer_offsets-49 in /var/lib/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:53:07 kafka | [2025-06-17 11:47:35,927] INFO [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:35,927] INFO [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:35,927] INFO [Broker id=1] Leader __consumer_offsets-49 with topic id Some(vdtI5O6tRD2bEfSOaD9o4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,935] INFO [LogLoader partition=__consumer_offsets-39, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:53:07 kafka | [2025-06-17 11:47:35,936] INFO Created log for partition __consumer_offsets-39 in /var/lib/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:53:07 kafka | [2025-06-17 11:47:35,936] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:35,937] INFO [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:35,937] INFO [Broker id=1] Leader __consumer_offsets-39 with topic id Some(vdtI5O6tRD2bEfSOaD9o4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,944] INFO [LogLoader partition=__consumer_offsets-9, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:53:07 kafka | [2025-06-17 11:47:35,945] INFO Created log for partition __consumer_offsets-9 in /var/lib/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:53:07 kafka | [2025-06-17 11:47:35,945] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:35,945] INFO [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:35,945] INFO [Broker id=1] Leader __consumer_offsets-9 with topic id Some(vdtI5O6tRD2bEfSOaD9o4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,953] INFO [LogLoader partition=__consumer_offsets-24, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:53:07 kafka | [2025-06-17 11:47:35,954] INFO Created log for partition __consumer_offsets-24 in /var/lib/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:53:07 kafka | [2025-06-17 11:47:35,954] INFO [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:35,955] INFO [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:35,955] INFO [Broker id=1] Leader __consumer_offsets-24 with topic id Some(vdtI5O6tRD2bEfSOaD9o4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,961] INFO [LogLoader partition=__consumer_offsets-31, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:53:07 kafka | [2025-06-17 11:47:35,961] INFO Created log for partition __consumer_offsets-31 in /var/lib/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:53:07 kafka | [2025-06-17 11:47:35,961] INFO [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:35,962] INFO [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:35,962] INFO [Broker id=1] Leader __consumer_offsets-31 with topic id Some(vdtI5O6tRD2bEfSOaD9o4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,969] INFO [LogLoader partition=__consumer_offsets-46, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:53:07 kafka | [2025-06-17 11:47:35,969] INFO Created log for partition __consumer_offsets-46 in /var/lib/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:53:07 kafka | [2025-06-17 11:47:35,969] INFO [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:35,970] INFO [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:35,970] INFO [Broker id=1] Leader __consumer_offsets-46 with topic id Some(vdtI5O6tRD2bEfSOaD9o4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,977] INFO [LogLoader partition=__consumer_offsets-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:53:07 kafka | [2025-06-17 11:47:35,978] INFO Created log for partition __consumer_offsets-1 in /var/lib/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:53:07 kafka | [2025-06-17 11:47:35,978] INFO [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:35,978] INFO [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:35,978] INFO [Broker id=1] Leader __consumer_offsets-1 with topic id Some(vdtI5O6tRD2bEfSOaD9o4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,985] INFO [LogLoader partition=__consumer_offsets-16, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:53:07 kafka | [2025-06-17 11:47:35,986] INFO Created log for partition __consumer_offsets-16 in /var/lib/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:53:07 kafka | [2025-06-17 11:47:35,986] INFO [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:35,986] INFO [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:35,987] INFO [Broker id=1] Leader __consumer_offsets-16 with topic id Some(vdtI5O6tRD2bEfSOaD9o4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:35,993] INFO [LogLoader partition=__consumer_offsets-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:53:07 kafka | [2025-06-17 11:47:35,994] INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:53:07 kafka | [2025-06-17 11:47:35,994] INFO [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:35,994] INFO [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:35,994] INFO [Broker id=1] Leader __consumer_offsets-2 with topic id Some(vdtI5O6tRD2bEfSOaD9o4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,001] INFO [LogLoader partition=__consumer_offsets-25, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:53:07 kafka | [2025-06-17 11:47:36,002] INFO Created log for partition __consumer_offsets-25 in /var/lib/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:53:07 kafka | [2025-06-17 11:47:36,002] INFO [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:36,002] INFO [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:36,002] INFO [Broker id=1] Leader __consumer_offsets-25 with topic id Some(vdtI5O6tRD2bEfSOaD9o4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,009] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:53:07 kafka | [2025-06-17 11:47:36,010] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:53:07 kafka | [2025-06-17 11:47:36,010] INFO [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:36,011] INFO [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:36,011] INFO [Broker id=1] Leader __consumer_offsets-40 with topic id Some(vdtI5O6tRD2bEfSOaD9o4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,017] INFO [LogLoader partition=__consumer_offsets-47, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:53:07 kafka | [2025-06-17 11:47:36,018] INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:53:07 kafka | [2025-06-17 11:47:36,018] INFO [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:36,018] INFO [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:36,018] INFO [Broker id=1] Leader __consumer_offsets-47 with topic id Some(vdtI5O6tRD2bEfSOaD9o4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,025] INFO [LogLoader partition=__consumer_offsets-17, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:53:07 kafka | [2025-06-17 11:47:36,027] INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:53:07 kafka | [2025-06-17 11:47:36,027] INFO [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:36,027] INFO [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:36,028] INFO [Broker id=1] Leader __consumer_offsets-17 with topic id Some(vdtI5O6tRD2bEfSOaD9o4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,035] INFO [LogLoader partition=__consumer_offsets-32, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:53:07 kafka | [2025-06-17 11:47:36,035] INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:53:07 kafka | [2025-06-17 11:47:36,036] INFO [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:36,036] INFO [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:36,036] INFO [Broker id=1] Leader __consumer_offsets-32 with topic id Some(vdtI5O6tRD2bEfSOaD9o4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,043] INFO [LogLoader partition=__consumer_offsets-37, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:53:07 kafka | [2025-06-17 11:47:36,044] INFO Created log for partition __consumer_offsets-37 in /var/lib/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:53:07 kafka | [2025-06-17 11:47:36,044] INFO [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:36,044] INFO [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:36,044] INFO [Broker id=1] Leader __consumer_offsets-37 with topic id Some(vdtI5O6tRD2bEfSOaD9o4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,053] INFO [LogLoader partition=__consumer_offsets-7, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:53:07 kafka | [2025-06-17 11:47:36,053] INFO Created log for partition __consumer_offsets-7 in /var/lib/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:53:07 kafka | [2025-06-17 11:47:36,053] INFO [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:36,053] INFO [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:36,054] INFO [Broker id=1] Leader __consumer_offsets-7 with topic id Some(vdtI5O6tRD2bEfSOaD9o4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,062] INFO [LogLoader partition=__consumer_offsets-22, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:53:07 kafka | [2025-06-17 11:47:36,063] INFO Created log for partition __consumer_offsets-22 in /var/lib/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:53:07 kafka | [2025-06-17 11:47:36,063] INFO [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:36,063] INFO [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:36,063] INFO [Broker id=1] Leader __consumer_offsets-22 with topic id Some(vdtI5O6tRD2bEfSOaD9o4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,071] INFO [LogLoader partition=__consumer_offsets-29, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:53:07 kafka | [2025-06-17 11:47:36,073] INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:53:07 kafka | [2025-06-17 11:47:36,073] INFO [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:36,073] INFO [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:36,073] INFO [Broker id=1] Leader __consumer_offsets-29 with topic id Some(vdtI5O6tRD2bEfSOaD9o4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,080] INFO [LogLoader partition=__consumer_offsets-44, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:53:07 kafka | [2025-06-17 11:47:36,080] INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:53:07 kafka | [2025-06-17 11:47:36,081] INFO [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:36,081] INFO [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:36,081] INFO [Broker id=1] Leader __consumer_offsets-44 with topic id Some(vdtI5O6tRD2bEfSOaD9o4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,088] INFO [LogLoader partition=__consumer_offsets-14, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:53:07 kafka | [2025-06-17 11:47:36,088] INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:53:07 kafka | [2025-06-17 11:47:36,088] INFO [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:36,089] INFO [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:36,089] INFO [Broker id=1] Leader __consumer_offsets-14 with topic id Some(vdtI5O6tRD2bEfSOaD9o4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,096] INFO [LogLoader partition=__consumer_offsets-23, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:53:07 kafka | [2025-06-17 11:47:36,096] INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:53:07 kafka | [2025-06-17 11:47:36,096] INFO [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:36,096] INFO [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:36,096] INFO [Broker id=1] Leader __consumer_offsets-23 with topic id Some(vdtI5O6tRD2bEfSOaD9o4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,102] INFO [LogLoader partition=__consumer_offsets-38, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:53:07 kafka | [2025-06-17 11:47:36,102] INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:53:07 kafka | [2025-06-17 11:47:36,102] INFO [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:36,102] INFO [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:36,102] INFO [Broker id=1] Leader __consumer_offsets-38 with topic id Some(vdtI5O6tRD2bEfSOaD9o4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,108] INFO [LogLoader partition=__consumer_offsets-8, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:53:07 kafka | [2025-06-17 11:47:36,108] INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:53:07 kafka | [2025-06-17 11:47:36,108] INFO [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:36,108] INFO [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:36,108] INFO [Broker id=1] Leader __consumer_offsets-8 with topic id Some(vdtI5O6tRD2bEfSOaD9o4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,113] INFO [LogLoader partition=policy-pdp-pap-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:53:07 kafka | [2025-06-17 11:47:36,114] INFO Created log for partition policy-pdp-pap-0 in /var/lib/kafka/data/policy-pdp-pap-0 with properties {} (kafka.log.LogManager) 11:53:07 kafka | [2025-06-17 11:47:36,114] INFO [Partition policy-pdp-pap-0 broker=1] No checkpointed highwatermark is found for partition policy-pdp-pap-0 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:36,114] INFO [Partition policy-pdp-pap-0 broker=1] Log loaded for partition policy-pdp-pap-0 with initial high watermark 0 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:36,114] INFO [Broker id=1] Leader policy-pdp-pap-0 with topic id Some(jMVTtaXxQSCPs2-SiYbXoQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,122] INFO [LogLoader partition=__consumer_offsets-45, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:53:07 kafka | [2025-06-17 11:47:36,122] INFO Created log for partition __consumer_offsets-45 in /var/lib/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:53:07 kafka | [2025-06-17 11:47:36,122] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:36,122] INFO [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:36,122] INFO [Broker id=1] Leader __consumer_offsets-45 with topic id Some(vdtI5O6tRD2bEfSOaD9o4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,127] INFO [LogLoader partition=__consumer_offsets-15, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:53:07 kafka | [2025-06-17 11:47:36,128] INFO Created log for partition __consumer_offsets-15 in /var/lib/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:53:07 kafka | [2025-06-17 11:47:36,128] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:36,128] INFO [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:36,128] INFO [Broker id=1] Leader __consumer_offsets-15 with topic id Some(vdtI5O6tRD2bEfSOaD9o4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,136] INFO [LogLoader partition=__consumer_offsets-30, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:53:07 kafka | [2025-06-17 11:47:36,136] INFO Created log for partition __consumer_offsets-30 in /var/lib/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:53:07 kafka | [2025-06-17 11:47:36,136] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:36,136] INFO [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:36,137] INFO [Broker id=1] Leader __consumer_offsets-30 with topic id Some(vdtI5O6tRD2bEfSOaD9o4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,143] INFO [LogLoader partition=__consumer_offsets-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:53:07 kafka | [2025-06-17 11:47:36,144] INFO Created log for partition __consumer_offsets-0 in /var/lib/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:53:07 kafka | [2025-06-17 11:47:36,144] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:36,144] INFO [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:36,144] INFO [Broker id=1] Leader __consumer_offsets-0 with topic id Some(vdtI5O6tRD2bEfSOaD9o4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,151] INFO [LogLoader partition=__consumer_offsets-35, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:53:07 kafka | [2025-06-17 11:47:36,151] INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:53:07 kafka | [2025-06-17 11:47:36,151] INFO [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:36,151] INFO [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:36,151] INFO [Broker id=1] Leader __consumer_offsets-35 with topic id Some(vdtI5O6tRD2bEfSOaD9o4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,158] INFO [LogLoader partition=__consumer_offsets-5, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:53:07 kafka | [2025-06-17 11:47:36,159] INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:53:07 kafka | [2025-06-17 11:47:36,159] INFO [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:36,159] INFO [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:36,159] INFO [Broker id=1] Leader __consumer_offsets-5 with topic id Some(vdtI5O6tRD2bEfSOaD9o4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,165] INFO [LogLoader partition=__consumer_offsets-20, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:53:07 kafka | [2025-06-17 11:47:36,166] INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:53:07 kafka | [2025-06-17 11:47:36,166] INFO [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:36,166] INFO [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:36,166] INFO [Broker id=1] Leader __consumer_offsets-20 with topic id Some(vdtI5O6tRD2bEfSOaD9o4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,171] INFO [LogLoader partition=__consumer_offsets-27, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:53:07 kafka | [2025-06-17 11:47:36,171] INFO Created log for partition __consumer_offsets-27 in /var/lib/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:53:07 kafka | [2025-06-17 11:47:36,171] INFO [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:36,171] INFO [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:36,171] INFO [Broker id=1] Leader __consumer_offsets-27 with topic id Some(vdtI5O6tRD2bEfSOaD9o4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,177] INFO [LogLoader partition=__consumer_offsets-42, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:53:07 kafka | [2025-06-17 11:47:36,178] INFO Created log for partition __consumer_offsets-42 in /var/lib/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:53:07 kafka | [2025-06-17 11:47:36,178] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:36,178] INFO [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:36,178] INFO [Broker id=1] Leader __consumer_offsets-42 with topic id Some(vdtI5O6tRD2bEfSOaD9o4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,183] INFO [LogLoader partition=__consumer_offsets-12, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:53:07 kafka | [2025-06-17 11:47:36,184] INFO Created log for partition __consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:53:07 kafka | [2025-06-17 11:47:36,184] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:36,184] INFO [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:36,184] INFO [Broker id=1] Leader __consumer_offsets-12 with topic id Some(vdtI5O6tRD2bEfSOaD9o4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,191] INFO [LogLoader partition=__consumer_offsets-21, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:53:07 kafka | [2025-06-17 11:47:36,191] INFO Created log for partition __consumer_offsets-21 in /var/lib/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:53:07 kafka | [2025-06-17 11:47:36,191] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:36,191] INFO [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:36,191] INFO [Broker id=1] Leader __consumer_offsets-21 with topic id Some(vdtI5O6tRD2bEfSOaD9o4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,199] INFO [LogLoader partition=__consumer_offsets-36, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:53:07 kafka | [2025-06-17 11:47:36,200] INFO Created log for partition __consumer_offsets-36 in /var/lib/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:53:07 kafka | [2025-06-17 11:47:36,200] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:36,200] INFO [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:36,200] INFO [Broker id=1] Leader __consumer_offsets-36 with topic id Some(vdtI5O6tRD2bEfSOaD9o4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,207] INFO [LogLoader partition=__consumer_offsets-6, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:53:07 kafka | [2025-06-17 11:47:36,207] INFO Created log for partition __consumer_offsets-6 in /var/lib/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:53:07 kafka | [2025-06-17 11:47:36,207] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:36,208] INFO [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:36,208] INFO [Broker id=1] Leader __consumer_offsets-6 with topic id Some(vdtI5O6tRD2bEfSOaD9o4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,215] INFO [LogLoader partition=__consumer_offsets-43, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:53:07 kafka | [2025-06-17 11:47:36,215] INFO Created log for partition __consumer_offsets-43 in /var/lib/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:53:07 kafka | [2025-06-17 11:47:36,215] INFO [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:36,215] INFO [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:36,216] INFO [Broker id=1] Leader __consumer_offsets-43 with topic id Some(vdtI5O6tRD2bEfSOaD9o4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,223] INFO [LogLoader partition=__consumer_offsets-13, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:53:07 kafka | [2025-06-17 11:47:36,223] INFO Created log for partition __consumer_offsets-13 in /var/lib/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:53:07 kafka | [2025-06-17 11:47:36,223] INFO [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:36,223] INFO [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:36,224] INFO [Broker id=1] Leader __consumer_offsets-13 with topic id Some(vdtI5O6tRD2bEfSOaD9o4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,229] INFO [LogLoader partition=__consumer_offsets-28, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:53:07 kafka | [2025-06-17 11:47:36,230] INFO Created log for partition __consumer_offsets-28 in /var/lib/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:53:07 kafka | [2025-06-17 11:47:36,230] INFO [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:36,230] INFO [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:47:36,230] INFO [Broker id=1] Leader __consumer_offsets-28 with topic id Some(vdtI5O6tRD2bEfSOaD9o4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,234] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,234] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,234] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,234] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,234] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,234] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-48 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,234] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,234] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-34 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,234] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-4 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,234] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,234] TRACE [Broker id=1] Completed LeaderAndIsr 
request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-26 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,234] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-49 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,234] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-39 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,234] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-9 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,234] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-24 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,234] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-31 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,234] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-46 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,234] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,234] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-16 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,234] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,234] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-25 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,234] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-40 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,234] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-47 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,234] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,234] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-32 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,234] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-37 (state.change.logger) 11:53:07 
kafka | [2025-06-17 11:47:36,234] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-7 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,234] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-22 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,234] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,234] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-44 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,234] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-14 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,234] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,234] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,234] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-8 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,234] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,234] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-45 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,234] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-15 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,234] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-30 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,234] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-0 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,234] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-35 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,234] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-5 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,234] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader 
transition for partition __consumer_offsets-20 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,234] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-27 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,235] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-42 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,235] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-12 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,235] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-21 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,235] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-36 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,235] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-6 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,235] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-43 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,235] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-13 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,235] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-28 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,239] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:53:07 kafka | [2025-06-17 11:47:36,244] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,245] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:53:07 kafka | [2025-06-17 11:47:36,246] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,246] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:53:07 kafka | [2025-06-17 11:47:36,246] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,246] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:53:07 kafka | [2025-06-17 11:47:36,246] INFO [GroupMetadataManager brokerId=1] Scheduling 
loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,246] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:53:07 kafka | [2025-06-17 11:47:36,246] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,246] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:53:07 kafka | [2025-06-17 11:47:36,246] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,246] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:53:07 kafka | [2025-06-17 11:47:36,246] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,246] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:53:07 kafka | [2025-06-17 11:47:36,246] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,246] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:53:07 kafka | [2025-06-17 11:47:36,246] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,246] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:53:07 kafka | [2025-06-17 11:47:36,246] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,246] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:53:07 kafka | [2025-06-17 11:47:36,246] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,246] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:53:07 kafka | [2025-06-17 11:47:36,246] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,246] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:53:07 kafka | [2025-06-17 11:47:36,246] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata 
from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,246] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:53:07 kafka | [2025-06-17 11:47:36,246] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,246] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:53:07 kafka | [2025-06-17 11:47:36,246] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,246] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:53:07 kafka | [2025-06-17 11:47:36,246] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,246] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:53:07 kafka | [2025-06-17 11:47:36,246] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,246] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:53:07 kafka | [2025-06-17 11:47:36,246] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,246] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:53:07 kafka | [2025-06-17 11:47:36,246] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,246] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:53:07 kafka | [2025-06-17 11:47:36,246] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,246] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:53:07 kafka | [2025-06-17 11:47:36,246] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,246] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:53:07 kafka | [2025-06-17 11:47:36,246] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 
(kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,246] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:53:07 kafka | [2025-06-17 11:47:36,246] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,246] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:53:07 kafka | [2025-06-17 11:47:36,246] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,246] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:53:07 kafka | [2025-06-17 11:47:36,246] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,246] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:53:07 kafka | [2025-06-17 11:47:36,246] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,246] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:53:07 kafka | [2025-06-17 11:47:36,246] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,246] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 22 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:53:07 kafka | [2025-06-17 11:47:36,246] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,246] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:53:07 kafka | [2025-06-17 11:47:36,246] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,246] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:53:07 kafka | [2025-06-17 11:47:36,246] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,246] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:53:07 kafka | [2025-06-17 11:47:36,246] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 
(kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,247] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:53:07 kafka | [2025-06-17 11:47:36,247] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,247] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:53:07 kafka | [2025-06-17 11:47:36,247] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,247] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:53:07 kafka | [2025-06-17 11:47:36,247] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,247] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:53:07 kafka | [2025-06-17 11:47:36,247] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,247] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:53:07 kafka | [2025-06-17 11:47:36,247] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,247] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:53:07 kafka | [2025-06-17 11:47:36,247] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,247] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:53:07 kafka | [2025-06-17 11:47:36,247] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,247] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:53:07 kafka | [2025-06-17 11:47:36,247] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,247] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:53:07 kafka | [2025-06-17 11:47:36,247] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 
(kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,247] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:53:07 kafka | [2025-06-17 11:47:36,247] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,247] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:53:07 kafka | [2025-06-17 11:47:36,247] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,247] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:53:07 kafka | [2025-06-17 11:47:36,247] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,247] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:53:07 kafka | [2025-06-17 11:47:36,247] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,247] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:53:07 kafka | [2025-06-17 11:47:36,247] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,247] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:53:07 kafka | [2025-06-17 11:47:36,247] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,247] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:53:07 kafka | [2025-06-17 11:47:36,247] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,247] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:53:07 kafka | [2025-06-17 11:47:36,247] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,247] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:53:07 kafka | [2025-06-17 11:47:36,247] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 
(kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,247] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:53:07 kafka | [2025-06-17 11:47:36,247] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,250] INFO [Broker id=1] Finished LeaderAndIsr request in 538ms correlationId 1 from controller 1 for 51 partitions (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,252] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 7 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,253] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,253] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,253] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,253] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,253] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,253] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,253] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,254] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 8 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,254] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,254] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,254] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,255] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=vdtI5O6tRD2bEfSOaD9o4w, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', 
partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)]), LeaderAndIsrTopicError(topicId=jMVTtaXxQSCPs2-SiYbXoQ, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,257] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,257] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,263] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-pdp-pap-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,263] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-13 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,263] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 17 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,264] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-46 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,264] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-9 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,264] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-42 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,264] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-21 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,264] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,264] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-17 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,264] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-30 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,264] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-26 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,264] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,264] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-5 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,264] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-38 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,264] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-1 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,264] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,264] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-34 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,264] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-16 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,265] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-45 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,265] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 19 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,265] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-12 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,265] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-41 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,265] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,265] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-24 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,265] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-20 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,265] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-49 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,265] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,265] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,265] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-29 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,265] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-25 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,266] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-8 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,266] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 20 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,266] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-37 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,266] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-4 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,266] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-33 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,266] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-15 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,266] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-48 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,266] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-11 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,266] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-44 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,266] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 20 milliseconds for epoch 0, of which 20 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,266] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-23 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,267] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-19 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,267] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-32 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,267] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 21 milliseconds for epoch 0, of which 21 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,267] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-28 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,267] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-7 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,267] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-40 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,267] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-3 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,267] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 21 milliseconds for epoch 0, of which 21 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,268] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-36 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,268] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-47 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,268] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-14 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,268] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-43 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,268] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-10 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,268] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-22 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,268] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-18 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,268] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-31 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,268] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, 
replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-27 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,268] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-39 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,268] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-6 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,268] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 22 milliseconds for epoch 0, of which 22 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,268] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-35 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,269] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-2 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,269] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 23 milliseconds for epoch 0, of which 23 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,269] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 23 milliseconds for epoch 0, of which 23 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,269] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 23 milliseconds for epoch 0, of which 23 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,270] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 24 milliseconds for epoch 0, of which 24 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,270] INFO [Broker id=1] Add 51 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,270] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 24 milliseconds for epoch 0, of which 24 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,271] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 11:53:07 kafka | [2025-06-17 11:47:36,271] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 24 milliseconds for epoch 0, of which 23 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,272] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 25 milliseconds for epoch 0, of which 25 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,272] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 25 milliseconds for epoch 0, of which 25 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,273] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 26 milliseconds for epoch 0, of which 25 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,273] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 26 milliseconds for epoch 0, of which 26 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,273] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 26 milliseconds for epoch 0, of which 26 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,273] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 26 milliseconds for epoch 0, of which 26 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,274] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 27 milliseconds for epoch 0, of which 26 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,274] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 27 milliseconds for epoch 0, of which 27 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,274] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 27 milliseconds for epoch 0, of which 27 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,274] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 27 milliseconds for epoch 0, of which 27 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,274] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 27 milliseconds for epoch 0, of which 27 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,274] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 27 milliseconds for epoch 0, of which 27 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,275] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 28 milliseconds for epoch 0, of which 28 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,275] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 28 milliseconds for epoch 0, of which 28 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,275] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 28 milliseconds for epoch 0, of which 28 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,275] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 28 milliseconds for epoch 0, of which 28 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,275] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 28 milliseconds for epoch 0, of which 28 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,275] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 28 milliseconds for epoch 0, of which 28 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 11:53:07 kafka | [2025-06-17 11:47:36,376] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 41725656-81de-4f00-877a-6abbfa57a523 in Empty state. Created a new member id consumer-41725656-81de-4f00-877a-6abbfa57a523-3-9654c27e-0d41-49ed-9abe-c41c39af0c3b and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) 11:53:07 kafka | [2025-06-17 11:47:36,379] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group policy-pap in Empty state. Created a new member id consumer-policy-pap-4-58de9a3d-1a09-4557-9737-af093407c693 and request the member to rejoin with this id. 
(kafka.coordinator.group.GroupCoordinator) 11:53:07 kafka | [2025-06-17 11:47:36,389] INFO [GroupCoordinator 1]: Preparing to rebalance group 41725656-81de-4f00-877a-6abbfa57a523 in state PreparingRebalance with old generation 0 (__consumer_offsets-26) (reason: Adding new member consumer-41725656-81de-4f00-877a-6abbfa57a523-3-9654c27e-0d41-49ed-9abe-c41c39af0c3b with group instance id None; client reason: need to re-join with the given member-id: consumer-41725656-81de-4f00-877a-6abbfa57a523-3-9654c27e-0d41-49ed-9abe-c41c39af0c3b) (kafka.coordinator.group.GroupCoordinator) 11:53:07 kafka | [2025-06-17 11:47:36,389] INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-policy-pap-4-58de9a3d-1a09-4557-9737-af093407c693 with group instance id None; client reason: need to re-join with the given member-id: consumer-policy-pap-4-58de9a3d-1a09-4557-9737-af093407c693) (kafka.coordinator.group.GroupCoordinator) 11:53:07 kafka | [2025-06-17 11:47:39,403] INFO [GroupCoordinator 1]: Stabilized group 41725656-81de-4f00-877a-6abbfa57a523 generation 1 (__consumer_offsets-26) with 1 members (kafka.coordinator.group.GroupCoordinator) 11:53:07 kafka | [2025-06-17 11:47:39,408] INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator) 11:53:07 kafka | [2025-06-17 11:47:39,429] INFO [GroupCoordinator 1]: Assignment received from leader consumer-41725656-81de-4f00-877a-6abbfa57a523-3-9654c27e-0d41-49ed-9abe-c41c39af0c3b for group 41725656-81de-4f00-877a-6abbfa57a523 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) 11:53:07 kafka | [2025-06-17 11:47:39,429] INFO [GroupCoordinator 1]: Assignment received from leader consumer-policy-pap-4-58de9a3d-1a09-4557-9737-af093407c693 for group policy-pap for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) 11:53:07 kafka | [2025-06-17 11:48:20,693] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group opa-pdp in Empty state. Created a new member id rdkafka-22fc51a5-2e44-434b-a2ec-f18a7b8311af and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) 11:53:07 kafka | [2025-06-17 11:48:20,694] INFO [GroupCoordinator 1]: Preparing to rebalance group opa-pdp in state PreparingRebalance with old generation 0 (__consumer_offsets-25) (reason: Adding new member rdkafka-22fc51a5-2e44-434b-a2ec-f18a7b8311af with group instance id None; client reason: not provided) (kafka.coordinator.group.GroupCoordinator) 11:53:07 kafka | [2025-06-17 11:48:23,697] INFO [GroupCoordinator 1]: Stabilized group opa-pdp generation 1 (__consumer_offsets-25) with 1 members (kafka.coordinator.group.GroupCoordinator) 11:53:07 kafka | [2025-06-17 11:48:23,701] INFO [GroupCoordinator 1]: Assignment received from leader rdkafka-22fc51a5-2e44-434b-a2ec-f18a7b8311af for group opa-pdp for generation 1. The group has 1 members, 0 of which are static. 
(kafka.coordinator.group.GroupCoordinator) 11:53:07 kafka | [2025-06-17 11:49:31,386] INFO Creating topic policy-notification with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) 11:53:07 kafka | [2025-06-17 11:49:31,395] INFO [Controller id=1] New topics: [Set(policy-notification)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-notification,Some(hgAazTE_R5C6S95_6ydbQg),Map(policy-notification-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) 11:53:07 kafka | [2025-06-17 11:49:31,395] INFO [Controller id=1] New partition creation callback for policy-notification-0 (kafka.controller.KafkaController) 11:53:07 kafka | [2025-06-17 11:49:31,395] INFO [Controller id=1 epoch=1] Changed partition policy-notification-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:49:31,395] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 11:53:07 kafka | [2025-06-17 11:49:31,396] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-notification-0 from NonExistentReplica to NewReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:49:31,396] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 11:53:07 kafka | [2025-06-17 11:49:31,415] INFO [Controller id=1 epoch=1] Changed partition policy-notification-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:53:07 kafka | [2025-06-17 11:49:31,415] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-notification', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-notification-0 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:49:31,415] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 1 become-leader and 0 become-follower partitions (state.change.logger) 11:53:07 kafka | [2025-06-17 11:49:31,415] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 1 partitions (state.change.logger) 11:53:07 kafka | [2025-06-17 11:49:31,415] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-notification-0 from NewReplica to OnlineReplica (state.change.logger) 11:53:07 kafka | [2025-06-17 11:49:31,415] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 11:53:07 kafka | [2025-06-17 11:49:31,417] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 for 1 partitions (state.change.logger) 11:53:07 kafka | [2025-06-17 11:49:31,417] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-notification', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:49:31,418] TRACE [Broker 
id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition policy-notification-0 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:49:31,418] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions Set(policy-notification-0) (kafka.server.ReplicaFetcherManager) 11:53:07 kafka | [2025-06-17 11:49:31,418] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 3 from controller 1 epoch 1 as part of the become-leader transition for 1 partitions (state.change.logger) 11:53:07 kafka | [2025-06-17 11:49:31,421] INFO [LogLoader partition=policy-notification-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:53:07 kafka | [2025-06-17 11:49:31,422] INFO Created log for partition policy-notification-0 in /var/lib/kafka/data/policy-notification-0 with properties {} (kafka.log.LogManager) 11:53:07 kafka | [2025-06-17 11:49:31,423] INFO [Partition policy-notification-0 broker=1] No checkpointed highwatermark is found for partition policy-notification-0 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:49:31,423] INFO [Partition policy-notification-0 broker=1] Log loaded for partition policy-notification-0 with initial high watermark 0 (kafka.cluster.Partition) 11:53:07 kafka | [2025-06-17 11:49:31,423] INFO [Broker id=1] Leader policy-notification-0 with topic id Some(hgAazTE_R5C6S95_6ydbQg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 11:53:07 kafka | [2025-06-17 11:49:31,426] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition policy-notification-0 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:49:31,426] INFO [Broker id=1] Finished LeaderAndIsr request in 9ms correlationId 3 from controller 1 for 1 partitions (state.change.logger) 11:53:07 kafka | [2025-06-17 11:49:31,427] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=hgAazTE_R5C6S95_6ydbQg, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 3 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 11:53:07 kafka | [2025-06-17 11:49:31,428] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-notification', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-notification-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:49:31,428] INFO [Broker id=1] Add 1 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 11:53:07 kafka | [2025-06-17 11:49:31,429] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 4 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 11:53:07 kafka | [2025-06-17 11:51:10,115] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group testgrp in Empty state. 
Created a new member id rdkafka-2578d3be-71e0-4a7a-9263-df6668bc4900 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) 11:53:07 kafka | [2025-06-17 11:51:10,116] INFO [GroupCoordinator 1]: Preparing to rebalance group testgrp in state PreparingRebalance with old generation 0 (__consumer_offsets-3) (reason: Adding new member rdkafka-2578d3be-71e0-4a7a-9263-df6668bc4900 with group instance id None; client reason: not provided) (kafka.coordinator.group.GroupCoordinator) 11:53:07 kafka | [2025-06-17 11:51:13,118] INFO [GroupCoordinator 1]: Stabilized group testgrp generation 1 (__consumer_offsets-3) with 1 members (kafka.coordinator.group.GroupCoordinator) 11:53:07 kafka | [2025-06-17 11:51:13,121] INFO [GroupCoordinator 1]: Assignment received from leader rdkafka-2578d3be-71e0-4a7a-9263-df6668bc4900 for group testgrp for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) 11:53:07 kafka | [2025-06-17 11:51:13,239] INFO [GroupCoordinator 1]: Preparing to rebalance group testgrp in state PreparingRebalance with old generation 1 (__consumer_offsets-3) (reason: Removing member rdkafka-2578d3be-71e0-4a7a-9263-df6668bc4900 on LeaveGroup; client reason: not provided) (kafka.coordinator.group.GroupCoordinator) 11:53:07 kafka | [2025-06-17 11:51:13,240] INFO [GroupCoordinator 1]: Group testgrp with generation 2 is now empty (__consumer_offsets-3) (kafka.coordinator.group.GroupCoordinator) 11:53:07 kafka | [2025-06-17 11:51:13,243] INFO [GroupCoordinator 1]: Member MemberMetadata(memberId=rdkafka-2578d3be-71e0-4a7a-9263-df6668bc4900, groupInstanceId=None, clientId=rdkafka, clientHost=/172.17.0.7, sessionTimeoutMs=45000, rebalanceTimeoutMs=300000, supportedProtocols=List(range, roundrobin)) has left group testgrp through explicit `LeaveGroup`; client reason: not provided (kafka.coordinator.group.GroupCoordinator) 11:53:07 kafka | [2025-06-17 11:51:35,749] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group testgrp in Empty state. Created a new member id rdkafka-ea401dda-879d-4576-8a2e-a3d871d6723d and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) 11:53:07 kafka | [2025-06-17 11:51:35,750] INFO [GroupCoordinator 1]: Preparing to rebalance group testgrp in state PreparingRebalance with old generation 2 (__consumer_offsets-3) (reason: Adding new member rdkafka-ea401dda-879d-4576-8a2e-a3d871d6723d with group instance id None; client reason: not provided) (kafka.coordinator.group.GroupCoordinator) 11:53:07 kafka | [2025-06-17 11:51:38,751] INFO [GroupCoordinator 1]: Stabilized group testgrp generation 3 (__consumer_offsets-3) with 1 members (kafka.coordinator.group.GroupCoordinator) 11:53:07 kafka | [2025-06-17 11:51:38,754] INFO [GroupCoordinator 1]: Assignment received from leader rdkafka-ea401dda-879d-4576-8a2e-a3d871d6723d for group testgrp for generation 3. The group has 1 members, 0 of which are static. 
(kafka.coordinator.group.GroupCoordinator) 11:53:07 kafka | [2025-06-17 11:51:38,761] INFO [GroupCoordinator 1]: Preparing to rebalance group testgrp in state PreparingRebalance with old generation 3 (__consumer_offsets-3) (reason: Removing member rdkafka-ea401dda-879d-4576-8a2e-a3d871d6723d on LeaveGroup; client reason: not provided) (kafka.coordinator.group.GroupCoordinator) 11:53:07 kafka | [2025-06-17 11:51:38,761] INFO [GroupCoordinator 1]: Group testgrp with generation 4 is now empty (__consumer_offsets-3) (kafka.coordinator.group.GroupCoordinator) 11:53:07 kafka | [2025-06-17 11:51:38,762] INFO [GroupCoordinator 1]: Member MemberMetadata(memberId=rdkafka-ea401dda-879d-4576-8a2e-a3d871d6723d, groupInstanceId=None, clientId=rdkafka, clientHost=/172.17.0.7, sessionTimeoutMs=45000, rebalanceTimeoutMs=300000, supportedProtocols=List(range, roundrobin)) has left group testgrp through explicit `LeaveGroup`; client reason: not provided (kafka.coordinator.group.GroupCoordinator) 11:53:07 kafka | [2025-06-17 11:52:01,256] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group testgrp in Empty state. Created a new member id rdkafka-06361b46-e76b-4d2c-95de-ddcaa022a40d and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) 11:53:07 kafka | [2025-06-17 11:52:01,257] INFO [GroupCoordinator 1]: Preparing to rebalance group testgrp in state PreparingRebalance with old generation 4 (__consumer_offsets-3) (reason: Adding new member rdkafka-06361b46-e76b-4d2c-95de-ddcaa022a40d with group instance id None; client reason: not provided) (kafka.coordinator.group.GroupCoordinator) 11:53:07 kafka | [2025-06-17 11:52:04,258] INFO [GroupCoordinator 1]: Stabilized group testgrp generation 5 (__consumer_offsets-3) with 1 members (kafka.coordinator.group.GroupCoordinator) 11:53:07 kafka | [2025-06-17 11:52:04,261] INFO [GroupCoordinator 1]: Assignment received from leader rdkafka-06361b46-e76b-4d2c-95de-ddcaa022a40d for group testgrp for generation 5. The group has 1 members, 0 of which are static. 
(kafka.coordinator.group.GroupCoordinator)
11:53:07 kafka | [2025-06-17 11:52:04,267] INFO [GroupCoordinator 1]: Preparing to rebalance group testgrp in state PreparingRebalance with old generation 5 (__consumer_offsets-3) (reason: Removing member rdkafka-06361b46-e76b-4d2c-95de-ddcaa022a40d on LeaveGroup; client reason: not provided) (kafka.coordinator.group.GroupCoordinator)
11:53:07 kafka | [2025-06-17 11:52:04,268] INFO [GroupCoordinator 1]: Group testgrp with generation 6 is now empty (__consumer_offsets-3) (kafka.coordinator.group.GroupCoordinator)
11:53:07 kafka | [2025-06-17 11:52:04,268] INFO [GroupCoordinator 1]: Member MemberMetadata(memberId=rdkafka-06361b46-e76b-4d2c-95de-ddcaa022a40d, groupInstanceId=None, clientId=rdkafka, clientHost=/172.17.0.7, sessionTimeoutMs=45000, rebalanceTimeoutMs=300000, supportedProtocols=List(range, roundrobin)) has left group testgrp through explicit `LeaveGroup`; client reason: not provided (kafka.coordinator.group.GroupCoordinator)
11:53:07 kafka | [2025-06-17 11:52:06,682] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController)
11:53:07 kafka | [2025-06-17 11:52:06,682] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController)
11:53:07 kafka | [2025-06-17 11:52:06,686] DEBUG [Controller id=1] Topics not in preferred replica for broker 1 HashMap() (kafka.controller.KafkaController)
11:53:07 kafka | [2025-06-17 11:52:06,687] TRACE [Controller id=1] Leader imbalance ratio for broker 1 is 0.0 (kafka.controller.KafkaController)
11:53:08 policy-api | Waiting for policy-db-migrator port 6824...
11:53:08 policy-api | policy-db-migrator (172.17.0.7:6824) open
11:53:08 policy-api | Policy api config file: /opt/app/policy/api/etc/apiParameters.yaml
11:53:08 policy-api |
11:53:08 policy-api | [Spring Boot ASCII art startup banner]
11:53:08 policy-api |
11:53:08 policy-api | :: Spring Boot :: (v3.4.6)
11:53:08 policy-api |
11:53:08 policy-api | [2025-06-17T11:47:14.852+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.2.Final
11:53:08 policy-api | [2025-06-17T11:47:14.915+00:00|INFO|PolicyApiApplication|main] Starting PolicyApiApplication using Java 17.0.15 with PID 39 (/app/api.jar started by policy in /opt/app/policy/api/bin)
11:53:08 policy-api | [2025-06-17T11:47:14.916+00:00|INFO|PolicyApiApplication|main] The following 1 profile is active: "default"
11:53:08 policy-api | [2025-06-17T11:47:16.344+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode.
11:53:08 policy-api | [2025-06-17T11:47:16.521+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 165 ms. Found 6 JPA repository interfaces.
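A note on the kafka | GroupCoordinator entries above before the policy-api startup continues: the same cycle repeats three times (at 11:51:10, 11:51:35 and 11:52:01) — an rdkafka client joins the empty group testgrp, the group stabilizes at a new generation with one member, and the client then leaves through an explicit LeaveGroup, leaving the group empty again. That is the signature of a short-lived poll-and-exit consumer. A minimal sketch that would produce this pattern, using the confluent-kafka Python client (broker address taken from the compose network; the topic name is an assumption, since the log does not show what testgrp consumed):

    from confluent_kafka import Consumer

    # Settings mirror what the broker logged for the testgrp member:
    # clientId=rdkafka, sessionTimeoutMs=45000, range/roundrobin protocols.
    consumer = Consumer({
        "bootstrap.servers": "kafka:9092",
        "group.id": "testgrp",
        "session.timeout.ms": 45000,
        "auto.offset.reset": "earliest",
    })
    consumer.subscribe(["policy-pdp-pap"])  # topic is an assumption
    msg = consumer.poll(timeout=5.0)        # JoinGroup -> "Stabilized group testgrp generation N"
    if msg is not None and msg.error() is None:
        print(msg.value().decode(errors="replace"))
    consumer.close()                        # explicit LeaveGroup -> "Group testgrp ... is now empty"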
11:53:08 policy-api | [2025-06-17T11:47:17.128+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port 6969 (http)
11:53:08 policy-api | [2025-06-17T11:47:17.139+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"]
11:53:08 policy-api | [2025-06-17T11:47:17.141+00:00|INFO|StandardService|main] Starting service [Tomcat]
11:53:08 policy-api | [2025-06-17T11:47:17.141+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.41]
11:53:08 policy-api | [2025-06-17T11:47:17.178+00:00|INFO|[/policy/api/v1]|main] Initializing Spring embedded WebApplicationContext
11:53:08 policy-api | [2025-06-17T11:47:17.178+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 2207 ms
11:53:08 policy-api | [2025-06-17T11:47:17.511+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default]
11:53:08 policy-api | [2025-06-17T11:47:17.596+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.6.16.Final
11:53:08 policy-api | [2025-06-17T11:47:17.645+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled
11:53:08 policy-api | [2025-06-17T11:47:18.072+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer
11:53:08 policy-api | [2025-06-17T11:47:18.113+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting...
11:53:08 policy-api | [2025-06-17T11:47:18.347+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.postgresql.jdbc.PgConnection@59aa1d1c
11:53:08 policy-api | [2025-06-17T11:47:18.349+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed.
11:53:08 policy-api | [2025-06-17T11:47:18.428+00:00|INFO|pooling|main] HHH10001005: Database info:
11:53:08 policy-api | Database JDBC URL [Connecting through datasource 'HikariDataSource (HikariPool-1)']
11:53:08 policy-api | Database driver: undefined/unknown
11:53:08 policy-api | Database version: 16.4
11:53:08 policy-api | Autocommit mode: undefined/unknown
11:53:08 policy-api | Isolation level: undefined/unknown
11:53:08 policy-api | Minimum pool size: undefined/unknown
11:53:08 policy-api | Maximum pool size: undefined/unknown
11:53:08 policy-api | [2025-06-17T11:47:20.345+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration)
11:53:08 policy-api | [2025-06-17T11:47:20.348+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default'
11:53:08 policy-api | [2025-06-17T11:47:20.976+00:00|WARN|ApiDatabaseInitializer|main] Detected multi-versioned type: policytypes/onap.policies.monitoring.tcagen2.v2.yaml
11:53:08 policy-api | [2025-06-17T11:47:21.806+00:00|INFO|ApiDatabaseInitializer|main] Multi-versioned Service Template [onap.policies.Monitoring, onap.policies.monitoring.tcagen2]
11:53:08 policy-api | [2025-06-17T11:47:22.885+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning
11:53:08 policy-api | [2025-06-17T11:47:22.928+00:00|INFO|InitializeUserDetailsBeanManagerConfigurer$InitializeUserDetailsManagerConfigurer|main] Global AuthenticationManager configured with UserDetailsService bean with name inMemoryUserDetailsManager
11:53:08 policy-api | [2025-06-17T11:47:23.557+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoints beneath base path ''
11:53:08 policy-api | [2025-06-17T11:47:23.705+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"]
11:53:08 policy-api | [2025-06-17T11:47:23.732+00:00|INFO|TomcatWebServer|main] Tomcat started on port 6969 (http) with context path '/policy/api/v1'
11:53:08 policy-api | [2025-06-17T11:47:23.753+00:00|INFO|PolicyApiApplication|main] Started PolicyApiApplication in 9.556 seconds (process running for 10.173)
11:53:08 policy-api | [2025-06-17T11:47:39.919+00:00|INFO|[/policy/api/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet'
11:53:08 policy-api | [2025-06-17T11:47:39.920+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet'
11:53:08 policy-api | [2025-06-17T11:47:39.922+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 2 ms
11:53:08 policy-api | [2025-06-17T11:50:48.008+00:00|INFO|OrderedServiceImpl|http-nio-6969-exec-6] ***** OrderedServiceImpl implementers:
11:53:08 policy-api | []
11:53:08 policy-api | [2025-06-17T11:52:04.605+00:00|WARN|CommonRestController|http-nio-6969-exec-1] "incoming fragment" INVALID, item has status INVALID
11:53:08 policy-api | item "entity" value "abac:1.0.7" INVALID, does not equal existing entity
11:53:08 policy-api |
11:53:08 policy-csit | Invoking the robot tests from: opa-pdp-test.robot opa-pdp-slas.robot
11:53:08 policy-csit | Run Robot test
11:53:08 policy-csit | ROBOT_VARIABLES=-v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies
11:53:08 policy-csit | -v NODETEMPLATES:/opt/robotworkspace/models/models-examples/src/main/resources/nodetemplates
11:53:08 policy-csit | -v POLICY_API_IP:policy-api:6969
11:53:08 policy-csit | -v POLICY_RUNTIME_ACM_IP:policy-clamp-runtime-acm:6969
11:53:08 policy-csit | -v POLICY_PARTICIPANT_SIM_IP:policy-clamp-ac-sim-ppnt:6969
11:53:08 policy-csit | -v POLICY_PAP_IP:policy-pap:6969
11:53:08 policy-csit | -v APEX_IP:policy-apex-pdp:6969
11:53:08 policy-csit | -v APEX_EVENTS_IP:policy-apex-pdp:23324
11:53:08 policy-csit | -v KAFKA_IP:kafka:9092
11:53:08 policy-csit | -v PROMETHEUS_IP:prometheus:9090
11:53:08 policy-csit | -v POLICY_PDPX_IP:policy-xacml-pdp:6969
11:53:08 policy-csit | -v POLICY_OPA_IP:policy-opa-pdp:8282
11:53:08 policy-csit | -v POLICY_DROOLS_IP:policy-drools-pdp:9696
11:53:08 policy-csit | -v DROOLS_IP:policy-drools-apps:6969
11:53:08 policy-csit | -v DROOLS_IP_2:policy-drools-apps:9696
11:53:08 policy-csit | -v TEMP_FOLDER:/tmp/distribution
11:53:08 policy-csit | -v DISTRIBUTION_IP:policy-distribution:6969
11:53:08 policy-csit | -v TEST_ENV:docker
11:53:08 policy-csit | -v JAEGER_IP:jaeger:16686
11:53:08 policy-csit | Starting Robot test suites ...
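Each ROBOT_VARIABLES entry above is passed to Robot Framework as a -v name:value override that the suites read as global variables. The equivalent programmatic invocation looks roughly like this (a sketch using robot's Python API; the CSIT harness itself builds a robot command line in shell, and only a few of the overrides are repeated here):

    from robot import run

    rc = run(
        "opa-pdp-test.robot", "opa-pdp-slas.robot",
        variable=[
            "POLICY_API_IP:policy-api:6969",
            "POLICY_OPA_IP:policy-opa-pdp:8282",
            "PROMETHEUS_IP:prometheus:9090",
            "TEST_ENV:docker",
        ],
        outputdir="/tmp/results",  # matches the Output/Log/Report paths printed below
    )
    print("RESULT:", rc)           # 0 when every test passes, as in the run below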
11:53:08 policy-csit | ==============================================================================
11:53:08 policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas
11:53:08 policy-csit | ==============================================================================
11:53:08 policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas.Opa-Pdp-Test
11:53:08 policy-csit | ==============================================================================
11:53:08 policy-csit | Healthcheck :: Verify OPA PDP health check | PASS |
11:53:08 policy-csit | ------------------------------------------------------------------------------
11:53:08 policy-csit | ValidateDataBeforePolicyDeployment | PASS |
11:53:08 policy-csit | ------------------------------------------------------------------------------
11:53:08 policy-csit | ValidatesZonePolicy | PASS |
11:53:08 policy-csit | ------------------------------------------------------------------------------
11:53:08 policy-csit | ValidatesVehiclePolicy | PASS |
11:53:08 policy-csit | ------------------------------------------------------------------------------
11:53:08 policy-csit | ValidatesAbacPolicy | PASS |
11:53:08 policy-csit | ------------------------------------------------------------------------------
11:53:08 policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas.Opa-Pdp-Test | PASS |
11:53:08 policy-csit | 5 tests, 5 passed, 0 failed
11:53:08 policy-csit | ==============================================================================
11:53:08 policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas.Opa-Pdp-Slas
11:53:08 policy-csit | ==============================================================================
11:53:08 policy-csit | WaitForPrometheusServer :: Sleep time to wait for Prometheus serve... | PASS |
11:53:08 policy-csit | ------------------------------------------------------------------------------
11:53:08 policy-csit | ValidateOPAPolicyDecisionsTotalCounter :: Validate opa policy deci... | PASS |
11:53:08 policy-csit | ------------------------------------------------------------------------------
11:53:08 policy-csit | ValidateOPAPolicyDataTotalCounter :: Validate opa policy data coun... | PASS |
11:53:08 policy-csit | ------------------------------------------------------------------------------
11:53:08 policy-csit | ValidateOPADecisionAverageResponseTime :: Ensure average response ... | PASS |
11:53:08 policy-csit | ------------------------------------------------------------------------------
11:53:08 policy-csit | ValidateOPADataAverageResponseTime :: Ensure average response time... | PASS |
11:53:08 policy-csit | ------------------------------------------------------------------------------
11:53:08 policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas.Opa-Pdp-Slas | PASS |
11:53:08 policy-csit | 5 tests, 5 passed, 0 failed
11:53:08 policy-csit | ==============================================================================
11:53:08 policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas | PASS |
11:53:08 policy-csit | 10 tests, 10 passed, 0 failed
11:53:08 policy-csit | ==============================================================================
11:53:08 policy-csit | Output: /tmp/results/output.xml
11:53:08 policy-csit | Log: /tmp/results/log.html
11:53:08 policy-csit | Report: /tmp/results/report.html
11:53:08 policy-csit | RESULT: 0
11:53:08 policy-db-migrator | Waiting for postgres port 5432...
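Before following the migrator output, a note on how the Opa-Pdp-Slas checks above typically work: each one queries Prometheus (the PROMETHEUS_IP variable) over its HTTP API for the OPA PDP's counters and response-time metrics, and asserts on the returned sample. A sketch of such a check — the metric name here is illustrative, since the real names are whatever policy-opa-pdp exports for Prometheus to scrape:

    import requests

    resp = requests.get(
        "http://prometheus:9090/api/v1/query",
        params={"query": "pdpo_policy_decisions_total"},  # illustrative metric name
    )
    resp.raise_for_status()
    result = resp.json()["data"]["result"]
    # Prometheus returns [timestamp, "value"] sample pairs; a counter check in the
    # style of ValidateOPAPolicyDecisionsTotalCounter asserts the sample exists and grew.
    assert result and float(result[0]["value"][1]) > 0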
11:53:08 policy-db-migrator | nc: connect to postgres (172.17.0.4) port 5432 (tcp) failed: Connection refused 11:53:08 policy-db-migrator | nc: connect to postgres (172.17.0.4) port 5432 (tcp) failed: Connection refused 11:53:08 policy-db-migrator | nc: connect to postgres (172.17.0.4) port 5432 (tcp) failed: Connection refused 11:53:08 policy-db-migrator | Connection to postgres (172.17.0.4) 5432 port [tcp/postgresql] succeeded! 11:53:08 policy-db-migrator | Initializing policyadmin... 11:53:08 policy-db-migrator | 321 blocks 11:53:08 policy-db-migrator | Preparing upgrade release version: 0800 11:53:08 policy-db-migrator | Preparing upgrade release version: 0900 11:53:08 policy-db-migrator | Preparing upgrade release version: 1000 11:53:08 policy-db-migrator | Preparing upgrade release version: 1100 11:53:08 policy-db-migrator | Preparing upgrade release version: 1200 11:53:08 policy-db-migrator | Preparing upgrade release version: 1300 11:53:08 policy-db-migrator | Done 11:53:08 policy-db-migrator | List of databases 11:53:08 policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges 11:53:08 policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- 11:53:08 policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 11:53:08 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 11:53:08 policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 11:53:08 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 11:53:08 policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 11:53:08 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 11:53:08 policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 11:53:08 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 11:53:08 policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 11:53:08 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 11:53:08 policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 11:53:08 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 11:53:08 policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | 11:53:08 policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 11:53:08 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 11:53:08 policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 11:53:08 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 11:53:08 policy-db-migrator | (9 rows) 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | name | version 11:53:08 policy-db-migrator | -------------+--------- 11:53:08 policy-db-migrator | policyadmin | 0 11:53:08 policy-db-migrator | (1 row) 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime 11:53:08 policy-db-migrator | 
----+--------+-----------+--------------+------------+-----+---------+-------- 11:53:08 policy-db-migrator | (0 rows) 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | policyadmin: upgrade available: 0 -> 1300 11:53:08 policy-db-migrator | List of databases 11:53:08 policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges 11:53:08 policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- 11:53:08 policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 11:53:08 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 11:53:08 policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 11:53:08 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 11:53:08 policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 11:53:08 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 11:53:08 policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 11:53:08 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 11:53:08 policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 11:53:08 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 11:53:08 policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 11:53:08 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 11:53:08 policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | 11:53:08 policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 11:53:08 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 11:53:08 policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 11:53:08 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 11:53:08 policy-db-migrator | (9 rows) 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping 11:53:08 policy-db-migrator | NOTICE: relation "policyadmin_schema_changelog" already exists, skipping 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | upgrade: 0 -> 1300 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0100-jpapdpgroup_properties.sql 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0120-jpapdpsubgroup_policies.sql 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0130-jpapdpsubgroup_properties.sql 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 
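Every "> upgrade NNNN-*.sql" block from here on follows the same per-script protocol: run the script, echo the psql result tags (CREATE TABLE, ALTER TABLE, ...), record one audit row in policyadmin_schema_changelog (the "INSERT 0 1" line), and report the script's return code as "rc=0". Those audit rows are what the id/script/operation/tag table at the end of the run is printed from. A rough Python rendering of the loop (the real migrator is a shell script baked into the policy-db-migrator image, and the changelog insert below uses a simplified column set):

    import subprocess

    PSQL = ["psql", "-h", "postgres", "-U", "policy_user", "-d", "policyadmin"]

    def run_upgrade(script: str) -> int:
        print(f"> upgrade {script}")
        rc = subprocess.run(PSQL + ["-f", script]).returncode  # psql echoes CREATE TABLE etc.
        # One audit row per script ("INSERT 0 1" in the log); columns simplified here.
        subprocess.run(PSQL + ["-c",
            "INSERT INTO policyadmin_schema_changelog (script, operation, success) "
            f"VALUES ('{script}', 'upgrade', {1 if rc == 0 else 0})"])
        print(f"rc={rc}")
        return rc

    for script in ("0140-jpapdpsubgroup_supportedpolicytypes.sql",
                   "0150-jpatoscacapabilityassignment_attributes.sql"):
        if run_upgrade(script) != 0:
            break  # assumption: a non-zero rc aborts the remaining steps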
11:53:08 policy-db-migrator | > upgrade 0140-jpapdpsubgroup_supportedpolicytypes.sql 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0150-jpatoscacapabilityassignment_attributes.sql 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0160-jpatoscacapabilityassignment_metadata.sql 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0170-jpatoscacapabilityassignment_occurrences.sql 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0180-jpatoscacapabilityassignment_properties.sql 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0190-jpatoscacapabilitytype_metadata.sql 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0200-jpatoscacapabilitytype_properties.sql 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0210-jpatoscadatatype_constraints.sql 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0220-jpatoscadatatype_metadata.sql 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0230-jpatoscadatatype_properties.sql 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0240-jpatoscanodetemplate_metadata.sql 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0250-jpatoscanodetemplate_properties.sql 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0260-jpatoscanodetype_metadata.sql 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0270-jpatoscanodetype_properties.sql 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0280-jpatoscapolicy_metadata.sql 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0290-jpatoscapolicy_properties.sql 11:53:08 
policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0300-jpatoscapolicy_targets.sql 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0310-jpatoscapolicytype_metadata.sql 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0320-jpatoscapolicytype_properties.sql 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0330-jpatoscapolicytype_targets.sql 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0340-jpatoscapolicytype_triggers.sql 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0350-jpatoscaproperty_constraints.sql 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0360-jpatoscaproperty_metadata.sql 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0370-jpatoscarelationshiptype_metadata.sql 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0380-jpatoscarelationshiptype_properties.sql 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0390-jpatoscarequirement_metadata.sql 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0400-jpatoscarequirement_occurrences.sql 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0410-jpatoscarequirement_properties.sql 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0420-jpatoscaservicetemplate_metadata.sql 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0430-jpatoscatopologytemplate_inputs.sql 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0440-pdpgroup_pdpsubgroup.sql 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 
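One structural detail worth noticing in the steps further down: each foreign-key script appears twice — first as an 08xx file whose result tag is CREATE INDEX (for example 0830-FK_ToscaNodeTemplate_capabilitiesName.sql) and later as a same-named 09xx/10xx file whose tag is ALTER TABLE, which attaches the constraint itself. Sketched with psycopg2, using invented table and column names for illustration, the pair amounts to:

    import psycopg2

    conn = psycopg2.connect(host="postgres", dbname="policyadmin",
                            user="policy_user", password="***")  # credentials elided
    with conn, conn.cursor() as cur:
        # 08xx-style step: index the referencing column first (names invented).
        cur.execute('CREATE INDEX "FK_ToscaNodeTemplate_capabilitiesName" '
                    "ON toscanodetemplate (capabilitiesName)")
        # 09xx/10xx-style step: then add the foreign-key constraint itself.
        cur.execute('ALTER TABLE toscanodetemplate ADD CONSTRAINT '
                    '"FK_ToscaNodeTemplate_capabilitiesName_fk" '
                    "FOREIGN KEY (capabilitiesName) "
                    "REFERENCES toscacapabilityassignments (name)")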
11:53:08 policy-db-migrator | > upgrade 0450-pdpgroup.sql 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0460-pdppolicystatus.sql 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0470-pdp.sql 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0480-pdpstatistics.sql 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0490-pdpsubgroup_pdp.sql 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0500-pdpsubgroup.sql 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0510-toscacapabilityassignment.sql 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0520-toscacapabilityassignments.sql 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0530-toscacapabilityassignments_toscacapabilityassignment.sql 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0540-toscacapabilitytype.sql 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0550-toscacapabilitytypes.sql 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0560-toscacapabilitytypes_toscacapabilitytype.sql 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0570-toscadatatype.sql 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0580-toscadatatypes.sql 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0590-toscadatatypes_toscadatatype.sql 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0600-toscanodetemplate.sql 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 
0610-toscanodetemplates.sql 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0620-toscanodetemplates_toscanodetemplate.sql 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0630-toscanodetype.sql 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0640-toscanodetypes.sql 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0650-toscanodetypes_toscanodetype.sql 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0660-toscaparameter.sql 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0670-toscapolicies.sql 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0680-toscapolicies_toscapolicy.sql 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0690-toscapolicy.sql 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0700-toscapolicytype.sql 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0710-toscapolicytypes.sql 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0720-toscapolicytypes_toscapolicytype.sql 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0730-toscaproperty.sql 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0740-toscarelationshiptype.sql 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0750-toscarelationshiptypes.sql 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0760-toscarelationshiptypes_toscarelationshiptype.sql 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0770-toscarequirement.sql 11:53:08 
policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0780-toscarequirements.sql 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0790-toscarequirements_toscarequirement.sql 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0800-toscaservicetemplate.sql 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0810-toscatopologytemplate.sql 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0820-toscatrigger.sql 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0830-FK_ToscaNodeTemplate_capabilitiesName.sql 11:53:08 policy-db-migrator | CREATE INDEX 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0840-FK_ToscaNodeTemplate_requirementsName.sql 11:53:08 policy-db-migrator | CREATE INDEX 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0850-FK_ToscaNodeType_requirementsName.sql 11:53:08 policy-db-migrator | CREATE INDEX 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql 11:53:08 policy-db-migrator | CREATE INDEX 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0870-FK_ToscaServiceTemplate_dataTypesName.sql 11:53:08 policy-db-migrator | CREATE INDEX 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0880-FK_ToscaServiceTemplate_nodeTypesName.sql 11:53:08 policy-db-migrator | CREATE INDEX 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0890-FK_ToscaServiceTemplate_policyTypesName.sql 11:53:08 policy-db-migrator | CREATE INDEX 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql 11:53:08 policy-db-migrator | CREATE INDEX 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql 11:53:08 policy-db-migrator | CREATE INDEX 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql 11:53:08 policy-db-migrator | CREATE INDEX 11:53:08 policy-db-migrator | INSERT 0 1 
11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0940-PdpPolicyStatus_PdpGroup.sql 11:53:08 policy-db-migrator | CREATE INDEX 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql 11:53:08 policy-db-migrator | CREATE INDEX 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0960-FK_ToscaNodeTemplate_capabilitiesName.sql 11:53:08 policy-db-migrator | ALTER TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0970-FK_ToscaNodeTemplate_requirementsName.sql 11:53:08 policy-db-migrator | ALTER TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0980-FK_ToscaNodeType_requirementsName.sql 11:53:08 policy-db-migrator | ALTER TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql 11:53:08 policy-db-migrator | ALTER TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 1000-FK_ToscaServiceTemplate_dataTypesName.sql 11:53:08 policy-db-migrator | ALTER TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 1010-FK_ToscaServiceTemplate_nodeTypesName.sql 11:53:08 policy-db-migrator | ALTER TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 1020-FK_ToscaServiceTemplate_policyTypesName.sql 11:53:08 policy-db-migrator | ALTER TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql 11:53:08 policy-db-migrator | ALTER TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql 11:53:08 policy-db-migrator | ALTER TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 1050-FK_ToscaTopologyTemplate_policyName.sql 11:53:08 policy-db-migrator | ALTER TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql 11:53:08 policy-db-migrator | ALTER TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0100-pdp.sql 11:53:08 policy-db-migrator | ALTER TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql 11:53:08 policy-db-migrator | CREATE INDEX 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 
policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0120-pk_pdpstatistics.sql 11:53:08 policy-db-migrator | ALTER TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0130-pdpstatistics.sql 11:53:08 policy-db-migrator | ALTER TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0140-pk_pdpstatistics.sql 11:53:08 policy-db-migrator | UPDATE 0 11:53:08 policy-db-migrator | ALTER TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0150-pdpstatistics.sql 11:53:08 policy-db-migrator | ALTER TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0160-jpapdpstatistics_enginestats.sql 11:53:08 policy-db-migrator | ALTER TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0170-jpapdpstatistics_enginestats.sql 11:53:08 policy-db-migrator | UPDATE 0 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0180-jpapdpstatistics_enginestats.sql 11:53:08 policy-db-migrator | ALTER TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0190-jpapolicyaudit.sql 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0200-JpaPolicyAuditIndex_timestamp.sql 11:53:08 policy-db-migrator | CREATE INDEX 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0210-sequence.sql 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0220-sequence.sql 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0100-jpatoscapolicy_targets.sql 11:53:08 policy-db-migrator | ALTER TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0110-jpatoscapolicytype_targets.sql 11:53:08 policy-db-migrator | ALTER TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0120-toscatrigger.sql 11:53:08 policy-db-migrator | DROP TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0130-jpatoscapolicytype_triggers.sql 11:53:08 policy-db-migrator | ALTER TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0140-toscaparameter.sql 11:53:08 policy-db-migrator | DROP TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator 
| > upgrade 0150-toscaproperty.sql 11:53:08 policy-db-migrator | DROP TABLE 11:53:08 policy-db-migrator | DROP TABLE 11:53:08 policy-db-migrator | DROP TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0160-jpapolicyaudit_pk.sql 11:53:08 policy-db-migrator | ALTER TABLE 11:53:08 policy-db-migrator | ALTER TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0170-pdpstatistics_pk.sql 11:53:08 policy-db-migrator | ALTER TABLE 11:53:08 policy-db-migrator | ALTER TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0180-jpatoscanodetemplate_metadata.sql 11:53:08 policy-db-migrator | ALTER TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0100-upgrade.sql 11:53:08 policy-db-migrator | msg 11:53:08 policy-db-migrator | --------------------------- 11:53:08 policy-db-migrator | upgrade to 1100 completed 11:53:08 policy-db-migrator | (1 row) 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0100-jpapolicyaudit_renameuser.sql 11:53:08 policy-db-migrator | ALTER TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql 11:53:08 policy-db-migrator | DROP INDEX 11:53:08 policy-db-migrator | CREATE INDEX 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0120-audit_sequence.sql 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0130-statistics_sequence.sql 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0100-pdpstatistics.sql 11:53:08 policy-db-migrator | DROP TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql 11:53:08 policy-db-migrator | DROP TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0120-statistics_sequence.sql 11:53:08 policy-db-migrator | DROP TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | policyadmin: OK: upgrade (1300) 11:53:08 policy-db-migrator | List of databases 11:53:08 policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges 11:53:08 policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- 11:53:08 policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 11:53:08 
policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 11:53:08 policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 11:53:08 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 11:53:08 policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 11:53:08 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 11:53:08 policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 11:53:08 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 11:53:08 policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 11:53:08 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 11:53:08 policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 11:53:08 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 11:53:08 policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | 11:53:08 policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 11:53:08 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 11:53:08 policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 11:53:08 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 11:53:08 policy-db-migrator | (9 rows) 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping 11:53:08 policy-db-migrator | NOTICE: relation "policyadmin_schema_changelog" already exists, skipping 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | name | version 11:53:08 policy-db-migrator | -------------+--------- 11:53:08 policy-db-migrator | policyadmin | 1300 11:53:08 policy-db-migrator | (1 row) 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime 11:53:08 policy-db-migrator | -----+---------------------------------------------------------------+-----------+--------------+------------+-------------------+---------+---------------------------- 11:53:08 policy-db-migrator | 1 | 0100-jpapdpgroup_properties.sql | upgrade | 0 | 0800 | 1706251147010800u | 1 | 2025-06-17 11:47:01.063273 11:53:08 policy-db-migrator | 2 | 0110-jpapdpstatistics_enginestats.sql | upgrade | 0 | 0800 | 1706251147010800u | 1 | 2025-06-17 11:47:01.105269 11:53:08 policy-db-migrator | 3 | 0120-jpapdpsubgroup_policies.sql | upgrade | 0 | 0800 | 1706251147010800u | 1 | 2025-06-17 11:47:01.163651 11:53:08 policy-db-migrator | 4 | 0130-jpapdpsubgroup_properties.sql | upgrade | 0 | 0800 | 1706251147010800u | 1 | 2025-06-17 11:47:01.216583 11:53:08 policy-db-migrator | 5 | 0140-jpapdpsubgroup_supportedpolicytypes.sql | upgrade | 0 | 0800 | 1706251147010800u | 1 | 2025-06-17 11:47:01.283657 11:53:08 policy-db-migrator | 6 | 0150-jpatoscacapabilityassignment_attributes.sql | upgrade | 0 | 0800 | 1706251147010800u | 1 | 2025-06-17 11:47:01.341405 11:53:08 policy-db-migrator | 7 | 0160-jpatoscacapabilityassignment_metadata.sql | upgrade | 0 | 0800 | 1706251147010800u | 1 | 2025-06-17 11:47:01.390357 11:53:08 policy-db-migrator | 8 | 0170-jpatoscacapabilityassignment_occurrences.sql | upgrade | 0 | 0800 | 1706251147010800u | 1 | 2025-06-17 11:47:01.436829 
11:53:08 policy-db-migrator | 9 | 0180-jpatoscacapabilityassignment_properties.sql | upgrade | 0 | 0800 | 1706251147010800u | 1 | 2025-06-17 11:47:01.488414 11:53:08 policy-db-migrator | 10 | 0190-jpatoscacapabilitytype_metadata.sql | upgrade | 0 | 0800 | 1706251147010800u | 1 | 2025-06-17 11:47:01.535252 11:53:08 policy-db-migrator | 11 | 0200-jpatoscacapabilitytype_properties.sql | upgrade | 0 | 0800 | 1706251147010800u | 1 | 2025-06-17 11:47:01.582345 11:53:08 policy-db-migrator | 12 | 0210-jpatoscadatatype_constraints.sql | upgrade | 0 | 0800 | 1706251147010800u | 1 | 2025-06-17 11:47:01.630801 11:53:08 policy-db-migrator | 13 | 0220-jpatoscadatatype_metadata.sql | upgrade | 0 | 0800 | 1706251147010800u | 1 | 2025-06-17 11:47:01.675231 11:53:08 policy-db-migrator | 14 | 0230-jpatoscadatatype_properties.sql | upgrade | 0 | 0800 | 1706251147010800u | 1 | 2025-06-17 11:47:01.721824 11:53:08 policy-db-migrator | 15 | 0240-jpatoscanodetemplate_metadata.sql | upgrade | 0 | 0800 | 1706251147010800u | 1 | 2025-06-17 11:47:01.776176 11:53:08 policy-db-migrator | 16 | 0250-jpatoscanodetemplate_properties.sql | upgrade | 0 | 0800 | 1706251147010800u | 1 | 2025-06-17 11:47:01.832232 11:53:08 policy-db-migrator | 17 | 0260-jpatoscanodetype_metadata.sql | upgrade | 0 | 0800 | 1706251147010800u | 1 | 2025-06-17 11:47:01.880889 11:53:08 policy-db-migrator | 18 | 0270-jpatoscanodetype_properties.sql | upgrade | 0 | 0800 | 1706251147010800u | 1 | 2025-06-17 11:47:01.931761 11:53:08 policy-db-migrator | 19 | 0280-jpatoscapolicy_metadata.sql | upgrade | 0 | 0800 | 1706251147010800u | 1 | 2025-06-17 11:47:01.982896 11:53:08 policy-db-migrator | 20 | 0290-jpatoscapolicy_properties.sql | upgrade | 0 | 0800 | 1706251147010800u | 1 | 2025-06-17 11:47:02.034727 11:53:08 policy-db-migrator | 21 | 0300-jpatoscapolicy_targets.sql | upgrade | 0 | 0800 | 1706251147010800u | 1 | 2025-06-17 11:47:02.084158 11:53:08 policy-db-migrator | 22 | 0310-jpatoscapolicytype_metadata.sql | upgrade | 0 | 0800 | 1706251147010800u | 1 | 2025-06-17 11:47:02.142985 11:53:08 policy-db-migrator | 23 | 0320-jpatoscapolicytype_properties.sql | upgrade | 0 | 0800 | 1706251147010800u | 1 | 2025-06-17 11:47:02.185887 11:53:08 policy-db-migrator | 24 | 0330-jpatoscapolicytype_targets.sql | upgrade | 0 | 0800 | 1706251147010800u | 1 | 2025-06-17 11:47:02.228549 11:53:08 policy-db-migrator | 25 | 0340-jpatoscapolicytype_triggers.sql | upgrade | 0 | 0800 | 1706251147010800u | 1 | 2025-06-17 11:47:02.276706 11:53:08 policy-db-migrator | 26 | 0350-jpatoscaproperty_constraints.sql | upgrade | 0 | 0800 | 1706251147010800u | 1 | 2025-06-17 11:47:02.32455 11:53:08 policy-db-migrator | 27 | 0360-jpatoscaproperty_metadata.sql | upgrade | 0 | 0800 | 1706251147010800u | 1 | 2025-06-17 11:47:02.378388 11:53:08 policy-db-migrator | 28 | 0370-jpatoscarelationshiptype_metadata.sql | upgrade | 0 | 0800 | 1706251147010800u | 1 | 2025-06-17 11:47:02.434467 11:53:08 policy-db-migrator | 29 | 0380-jpatoscarelationshiptype_properties.sql | upgrade | 0 | 0800 | 1706251147010800u | 1 | 2025-06-17 11:47:02.480956 11:53:08 policy-db-migrator | 30 | 0390-jpatoscarequirement_metadata.sql | upgrade | 0 | 0800 | 1706251147010800u | 1 | 2025-06-17 11:47:02.535947 11:53:08 policy-db-migrator | 31 | 0400-jpatoscarequirement_occurrences.sql | upgrade | 0 | 0800 | 1706251147010800u | 1 | 2025-06-17 11:47:02.584742 11:53:08 policy-db-migrator | 32 | 0410-jpatoscarequirement_properties.sql | upgrade | 0 | 0800 | 1706251147010800u | 1 | 2025-06-17 11:47:02.654341 11:53:08 
policy-db-migrator | 33 | 0420-jpatoscaservicetemplate_metadata.sql | upgrade | 0 | 0800 | 1706251147010800u | 1 | 2025-06-17 11:47:02.708028 11:53:08 policy-db-migrator | 34 | 0430-jpatoscatopologytemplate_inputs.sql | upgrade | 0 | 0800 | 1706251147010800u | 1 | 2025-06-17 11:47:02.783228 11:53:08 policy-db-migrator | 35 | 0440-pdpgroup_pdpsubgroup.sql | upgrade | 0 | 0800 | 1706251147010800u | 1 | 2025-06-17 11:47:02.857842 11:53:08 policy-db-migrator | 36 | 0450-pdpgroup.sql | upgrade | 0 | 0800 | 1706251147010800u | 1 | 2025-06-17 11:47:02.92166 11:53:08 policy-db-migrator | 37 | 0460-pdppolicystatus.sql | upgrade | 0 | 0800 | 1706251147010800u | 1 | 2025-06-17 11:47:02.987729 11:53:08 policy-db-migrator | 38 | 0470-pdp.sql | upgrade | 0 | 0800 | 1706251147010800u | 1 | 2025-06-17 11:47:03.042448 11:53:08 policy-db-migrator | 39 | 0480-pdpstatistics.sql | upgrade | 0 | 0800 | 1706251147010800u | 1 | 2025-06-17 11:47:03.138983 11:53:08 policy-db-migrator | 40 | 0490-pdpsubgroup_pdp.sql | upgrade | 0 | 0800 | 1706251147010800u | 1 | 2025-06-17 11:47:03.217099 11:53:08 policy-db-migrator | 41 | 0500-pdpsubgroup.sql | upgrade | 0 | 0800 | 1706251147010800u | 1 | 2025-06-17 11:47:03.277249 11:53:08 policy-db-migrator | 42 | 0510-toscacapabilityassignment.sql | upgrade | 0 | 0800 | 1706251147010800u | 1 | 2025-06-17 11:47:03.351719 11:53:08 policy-db-migrator | 43 | 0520-toscacapabilityassignments.sql | upgrade | 0 | 0800 | 1706251147010800u | 1 | 2025-06-17 11:47:03.410026 11:53:08 policy-db-migrator | 44 | 0530-toscacapabilityassignments_toscacapabilityassignment.sql | upgrade | 0 | 0800 | 1706251147010800u | 1 | 2025-06-17 11:47:03.464555 11:53:08 policy-db-migrator | 45 | 0540-toscacapabilitytype.sql | upgrade | 0 | 0800 | 1706251147010800u | 1 | 2025-06-17 11:47:03.529453 11:53:08 policy-db-migrator | 46 | 0550-toscacapabilitytypes.sql | upgrade | 0 | 0800 | 1706251147010800u | 1 | 2025-06-17 11:47:03.592675 11:53:08 policy-db-migrator | 47 | 0560-toscacapabilitytypes_toscacapabilitytype.sql | upgrade | 0 | 0800 | 1706251147010800u | 1 | 2025-06-17 11:47:03.647972 11:53:08 policy-db-migrator | 48 | 0570-toscadatatype.sql | upgrade | 0 | 0800 | 1706251147010800u | 1 | 2025-06-17 11:47:03.70871 11:53:08 policy-db-migrator | 49 | 0580-toscadatatypes.sql | upgrade | 0 | 0800 | 1706251147010800u | 1 | 2025-06-17 11:47:03.78189 11:53:08 policy-db-migrator | 50 | 0590-toscadatatypes_toscadatatype.sql | upgrade | 0 | 0800 | 1706251147010800u | 1 | 2025-06-17 11:47:03.837435 11:53:08 policy-db-migrator | 51 | 0600-toscanodetemplate.sql | upgrade | 0 | 0800 | 1706251147010800u | 1 | 2025-06-17 11:47:03.897427 11:53:08 policy-db-migrator | 52 | 0610-toscanodetemplates.sql | upgrade | 0 | 0800 | 1706251147010800u | 1 | 2025-06-17 11:47:03.952445 11:53:08 policy-db-migrator | 53 | 0620-toscanodetemplates_toscanodetemplate.sql | upgrade | 0 | 0800 | 1706251147010800u | 1 | 2025-06-17 11:47:04.008954 11:53:08 policy-db-migrator | 54 | 0630-toscanodetype.sql | upgrade | 0 | 0800 | 1706251147010800u | 1 | 2025-06-17 11:47:04.078493 11:53:08 policy-db-migrator | 55 | 0640-toscanodetypes.sql | upgrade | 0 | 0800 | 1706251147010800u | 1 | 2025-06-17 11:47:04.137117 11:53:08 policy-db-migrator | 56 | 0650-toscanodetypes_toscanodetype.sql | upgrade | 0 | 0800 | 1706251147010800u | 1 | 2025-06-17 11:47:04.206069 11:53:08 policy-db-migrator | 57 | 0660-toscaparameter.sql | upgrade | 0 | 0800 | 1706251147010800u | 1 | 2025-06-17 11:47:04.267911 11:53:08 policy-db-migrator | 58 | 0670-toscapolicies.sql | 
upgrade | 0 | 0800 | 1706251147010800u | 1 | 2025-06-17 11:47:04.324503 11:53:08 policy-db-migrator | 59 | 0680-toscapolicies_toscapolicy.sql | upgrade | 0 | 0800 | 1706251147010800u | 1 | 2025-06-17 11:47:04.381362 11:53:08 policy-db-migrator | 60 | 0690-toscapolicy.sql | upgrade | 0 | 0800 | 1706251147010800u | 1 | 2025-06-17 11:47:04.440579 11:53:08 policy-db-migrator | 61 | 0700-toscapolicytype.sql | upgrade | 0 | 0800 | 1706251147010800u | 1 | 2025-06-17 11:47:04.500277 11:53:08 policy-db-migrator | 62 | 0710-toscapolicytypes.sql | upgrade | 0 | 0800 | 1706251147010800u | 1 | 2025-06-17 11:47:04.561329 11:53:08 policy-db-migrator | 63 | 0720-toscapolicytypes_toscapolicytype.sql | upgrade | 0 | 0800 | 1706251147010800u | 1 | 2025-06-17 11:47:04.645989 11:53:08 policy-db-migrator | 64 | 0730-toscaproperty.sql | upgrade | 0 | 0800 | 1706251147010800u | 1 | 2025-06-17 11:47:04.721459 11:53:08 policy-db-migrator | 65 | 0740-toscarelationshiptype.sql | upgrade | 0 | 0800 | 1706251147010800u | 1 | 2025-06-17 11:47:04.79668 11:53:08 policy-db-migrator | 66 | 0750-toscarelationshiptypes.sql | upgrade | 0 | 0800 | 1706251147010800u | 1 | 2025-06-17 11:47:04.867059 11:53:08 policy-db-migrator | 67 | 0760-toscarelationshiptypes_toscarelationshiptype.sql | upgrade | 0 | 0800 | 1706251147010800u | 1 | 2025-06-17 11:47:04.989459 11:53:08 policy-db-migrator | 68 | 0770-toscarequirement.sql | upgrade | 0 | 0800 | 1706251147010800u | 1 | 2025-06-17 11:47:05.076873 11:53:08 policy-db-migrator | 69 | 0780-toscarequirements.sql | upgrade | 0 | 0800 | 1706251147010800u | 1 | 2025-06-17 11:47:05.134413 11:53:08 policy-db-migrator | 70 | 0790-toscarequirements_toscarequirement.sql | upgrade | 0 | 0800 | 1706251147010800u | 1 | 2025-06-17 11:47:05.19701 11:53:08 policy-db-migrator | 71 | 0800-toscaservicetemplate.sql | upgrade | 0 | 0800 | 1706251147010800u | 1 | 2025-06-17 11:47:05.294044 11:53:08 policy-db-migrator | 72 | 0810-toscatopologytemplate.sql | upgrade | 0 | 0800 | 1706251147010800u | 1 | 2025-06-17 11:47:05.362526 11:53:08 policy-db-migrator | 73 | 0820-toscatrigger.sql | upgrade | 0 | 0800 | 1706251147010800u | 1 | 2025-06-17 11:47:05.441953 11:53:08 policy-db-migrator | 74 | 0830-FK_ToscaNodeTemplate_capabilitiesName.sql | upgrade | 0 | 0800 | 1706251147010800u | 1 | 2025-06-17 11:47:05.504708 11:53:08 policy-db-migrator | 75 | 0840-FK_ToscaNodeTemplate_requirementsName.sql | upgrade | 0 | 0800 | 1706251147010800u | 1 | 2025-06-17 11:47:05.595586 11:53:08 policy-db-migrator | 76 | 0850-FK_ToscaNodeType_requirementsName.sql | upgrade | 0 | 0800 | 1706251147010800u | 1 | 2025-06-17 11:47:05.696212 11:53:08 policy-db-migrator | 77 | 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql | upgrade | 0 | 0800 | 1706251147010800u | 1 | 2025-06-17 11:47:05.768602 11:53:08 policy-db-migrator | 78 | 0870-FK_ToscaServiceTemplate_dataTypesName.sql | upgrade | 0 | 0800 | 1706251147010800u | 1 | 2025-06-17 11:47:05.836661 11:53:08 policy-db-migrator | 79 | 0880-FK_ToscaServiceTemplate_nodeTypesName.sql | upgrade | 0 | 0800 | 1706251147010800u | 1 | 2025-06-17 11:47:05.925039 11:53:08 policy-db-migrator | 80 | 0890-FK_ToscaServiceTemplate_policyTypesName.sql | upgrade | 0 | 0800 | 1706251147010800u | 1 | 2025-06-17 11:47:05.978957 11:53:08 policy-db-migrator | 81 | 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql | upgrade | 0 | 0800 | 1706251147010800u | 1 | 2025-06-17 11:47:06.039325 11:53:08 policy-db-migrator | 82 | 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql | upgrade | 0 | 0800 | 
1706251147010800u | 1 | 2025-06-17 11:47:06.105598 11:53:08 policy-db-migrator | 83 | 0920-FK_ToscaTopologyTemplate_policyName.sql | upgrade | 0 | 0800 | 1706251147010800u | 1 | 2025-06-17 11:47:06.157756 11:53:08 policy-db-migrator | 84 | 0940-PdpPolicyStatus_PdpGroup.sql | upgrade | 0 | 0800 | 1706251147010800u | 1 | 2025-06-17 11:47:06.230837 11:53:08 policy-db-migrator | 85 | 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql | upgrade | 0 | 0800 | 1706251147010800u | 1 | 2025-06-17 11:47:06.297548 11:53:08 policy-db-migrator | 86 | 0960-FK_ToscaNodeTemplate_capabilitiesName.sql | upgrade | 0 | 0800 | 1706251147010800u | 1 | 2025-06-17 11:47:06.351163 11:53:08 policy-db-migrator | 87 | 0970-FK_ToscaNodeTemplate_requirementsName.sql | upgrade | 0 | 0800 | 1706251147010800u | 1 | 2025-06-17 11:47:06.414271 11:53:08 policy-db-migrator | 88 | 0980-FK_ToscaNodeType_requirementsName.sql | upgrade | 0 | 0800 | 1706251147010800u | 1 | 2025-06-17 11:47:06.464739 11:53:08 policy-db-migrator | 89 | 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql | upgrade | 0 | 0800 | 1706251147010800u | 1 | 2025-06-17 11:47:06.512821 11:53:08 policy-db-migrator | 90 | 1000-FK_ToscaServiceTemplate_dataTypesName.sql | upgrade | 0 | 0800 | 1706251147010800u | 1 | 2025-06-17 11:47:06.560139 11:53:08 policy-db-migrator | 91 | 1010-FK_ToscaServiceTemplate_nodeTypesName.sql | upgrade | 0 | 0800 | 1706251147010800u | 1 | 2025-06-17 11:47:06.611764 11:53:08 policy-db-migrator | 92 | 1020-FK_ToscaServiceTemplate_policyTypesName.sql | upgrade | 0 | 0800 | 1706251147010800u | 1 | 2025-06-17 11:47:06.662905 11:53:08 policy-db-migrator | 93 | 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql | upgrade | 0 | 0800 | 1706251147010800u | 1 | 2025-06-17 11:47:06.710929 11:53:08 policy-db-migrator | 94 | 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql | upgrade | 0 | 0800 | 1706251147010800u | 1 | 2025-06-17 11:47:06.757358 11:53:08 policy-db-migrator | 95 | 1050-FK_ToscaTopologyTemplate_policyName.sql | upgrade | 0 | 0800 | 1706251147010800u | 1 | 2025-06-17 11:47:06.805886 11:53:08 policy-db-migrator | 96 | 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql | upgrade | 0 | 0800 | 1706251147010800u | 1 | 2025-06-17 11:47:06.857909 11:53:08 policy-db-migrator | 97 | 0100-pdp.sql | upgrade | 0800 | 0900 | 1706251147010900u | 1 | 2025-06-17 11:47:06.905436 11:53:08 policy-db-migrator | 98 | 0110-idx_tsidx1.sql | upgrade | 0800 | 0900 | 1706251147010900u | 1 | 2025-06-17 11:47:06.970541 11:53:08 policy-db-migrator | 99 | 0120-pk_pdpstatistics.sql | upgrade | 0800 | 0900 | 1706251147010900u | 1 | 2025-06-17 11:47:07.02496 11:53:08 policy-db-migrator | 100 | 0130-pdpstatistics.sql | upgrade | 0800 | 0900 | 1706251147010900u | 1 | 2025-06-17 11:47:07.102008 11:53:08 policy-db-migrator | 101 | 0140-pk_pdpstatistics.sql | upgrade | 0800 | 0900 | 1706251147010900u | 1 | 2025-06-17 11:47:07.159231 11:53:08 policy-db-migrator | 102 | 0150-pdpstatistics.sql | upgrade | 0800 | 0900 | 1706251147010900u | 1 | 2025-06-17 11:47:07.209434 11:53:08 policy-db-migrator | 103 | 0160-jpapdpstatistics_enginestats.sql | upgrade | 0800 | 0900 | 1706251147010900u | 1 | 2025-06-17 11:47:07.257638 11:53:08 policy-db-migrator | 104 | 0170-jpapdpstatistics_enginestats.sql | upgrade | 0800 | 0900 | 1706251147010900u | 1 | 2025-06-17 11:47:07.313477 11:53:08 policy-db-migrator | 105 | 0180-jpapdpstatistics_enginestats.sql | upgrade | 0800 | 0900 | 1706251147010900u | 1 | 2025-06-17 11:47:07.36388 11:53:08 policy-db-migrator | 106 | 
0190-jpapolicyaudit.sql | upgrade | 0800 | 0900 | 1706251147010900u | 1 | 2025-06-17 11:47:07.423918 11:53:08 policy-db-migrator | 107 | 0200-JpaPolicyAuditIndex_timestamp.sql | upgrade | 0800 | 0900 | 1706251147010900u | 1 | 2025-06-17 11:47:07.477725 11:53:08 policy-db-migrator | 108 | 0210-sequence.sql | upgrade | 0800 | 0900 | 1706251147010900u | 1 | 2025-06-17 11:47:07.535154 11:53:08 policy-db-migrator | 109 | 0220-sequence.sql | upgrade | 0800 | 0900 | 1706251147010900u | 1 | 2025-06-17 11:47:07.582301 11:53:08 policy-db-migrator | 110 | 0100-jpatoscapolicy_targets.sql | upgrade | 0900 | 1000 | 1706251147011000u | 1 | 2025-06-17 11:47:07.638166 11:53:08 policy-db-migrator | 111 | 0110-jpatoscapolicytype_targets.sql | upgrade | 0900 | 1000 | 1706251147011000u | 1 | 2025-06-17 11:47:07.6898 11:53:08 policy-db-migrator | 112 | 0120-toscatrigger.sql | upgrade | 0900 | 1000 | 1706251147011000u | 1 | 2025-06-17 11:47:07.740542 11:53:08 policy-db-migrator | 113 | 0130-jpatoscapolicytype_triggers.sql | upgrade | 0900 | 1000 | 1706251147011000u | 1 | 2025-06-17 11:47:07.791884 11:53:08 policy-db-migrator | 114 | 0140-toscaparameter.sql | upgrade | 0900 | 1000 | 1706251147011000u | 1 | 2025-06-17 11:47:07.841601 11:53:08 policy-db-migrator | 115 | 0150-toscaproperty.sql | upgrade | 0900 | 1000 | 1706251147011000u | 1 | 2025-06-17 11:47:07.896299 11:53:08 policy-db-migrator | 116 | 0160-jpapolicyaudit_pk.sql | upgrade | 0900 | 1000 | 1706251147011000u | 1 | 2025-06-17 11:47:07.950767 11:53:08 policy-db-migrator | 117 | 0170-pdpstatistics_pk.sql | upgrade | 0900 | 1000 | 1706251147011000u | 1 | 2025-06-17 11:47:08.007073 11:53:08 policy-db-migrator | 118 | 0180-jpatoscanodetemplate_metadata.sql | upgrade | 0900 | 1000 | 1706251147011000u | 1 | 2025-06-17 11:47:08.05956 11:53:08 policy-db-migrator | 119 | 0100-upgrade.sql | upgrade | 1000 | 1100 | 1706251147011100u | 1 | 2025-06-17 11:47:08.114368 11:53:08 policy-db-migrator | 120 | 0100-jpapolicyaudit_renameuser.sql | upgrade | 1100 | 1200 | 1706251147011200u | 1 | 2025-06-17 11:47:08.165847 11:53:08 policy-db-migrator | 121 | 0110-idx_tsidx1.sql | upgrade | 1100 | 1200 | 1706251147011200u | 1 | 2025-06-17 11:47:08.227988 11:53:08 policy-db-migrator | 122 | 0120-audit_sequence.sql | upgrade | 1100 | 1200 | 1706251147011200u | 1 | 2025-06-17 11:47:08.276329 11:53:08 policy-db-migrator | 123 | 0130-statistics_sequence.sql | upgrade | 1100 | 1200 | 1706251147011200u | 1 | 2025-06-17 11:47:08.344766 11:53:08 policy-db-migrator | 124 | 0100-pdpstatistics.sql | upgrade | 1200 | 1300 | 1706251147011300u | 1 | 2025-06-17 11:47:08.420849 11:53:08 policy-db-migrator | 125 | 0110-jpapdpstatistics_enginestats.sql | upgrade | 1200 | 1300 | 1706251147011300u | 1 | 2025-06-17 11:47:08.473207 11:53:08 policy-db-migrator | 126 | 0120-statistics_sequence.sql | upgrade | 1200 | 1300 | 1706251147011300u | 1 | 2025-06-17 11:47:08.523557 11:53:08 policy-db-migrator | (126 rows) 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | policyadmin: OK @ 1300 11:53:08 policy-db-migrator | Initializing clampacm... 
11:53:08 policy-db-migrator | 97 blocks 11:53:08 policy-db-migrator | Preparing upgrade release version: 1400 11:53:08 policy-db-migrator | Preparing upgrade release version: 1500 11:53:08 policy-db-migrator | Preparing upgrade release version: 1600 11:53:08 policy-db-migrator | Preparing upgrade release version: 1601 11:53:08 policy-db-migrator | Preparing upgrade release version: 1700 11:53:08 policy-db-migrator | Preparing upgrade release version: 1701 11:53:08 policy-db-migrator | Done 11:53:08 policy-db-migrator | List of databases 11:53:08 policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges 11:53:08 policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- 11:53:08 policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 11:53:08 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 11:53:08 policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 11:53:08 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 11:53:08 policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 11:53:08 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 11:53:08 policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 11:53:08 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 11:53:08 policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 11:53:08 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 11:53:08 policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 11:53:08 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 11:53:08 policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | 11:53:08 policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 11:53:08 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 11:53:08 policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 11:53:08 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 11:53:08 policy-db-migrator | (9 rows) 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | name | version 11:53:08 policy-db-migrator | ----------+--------- 11:53:08 policy-db-migrator | clampacm | 0 11:53:08 policy-db-migrator | (1 row) 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime 11:53:08 policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+-------- 11:53:08 policy-db-migrator | (0 rows) 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | clampacm: upgrade available: 0 -> 1701 11:53:08 policy-db-migrator | List of databases 11:53:08 policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges 11:53:08 policy-db-migrator | 
-------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- 11:53:08 policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 11:53:08 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 11:53:08 policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 11:53:08 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 11:53:08 policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 11:53:08 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 11:53:08 policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 11:53:08 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 11:53:08 policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 11:53:08 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 11:53:08 policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 11:53:08 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 11:53:08 policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | 11:53:08 policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 11:53:08 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 11:53:08 policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 11:53:08 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 11:53:08 policy-db-migrator | (9 rows) 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping 11:53:08 policy-db-migrator | NOTICE: relation "clampacm_schema_changelog" already exists, skipping 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | upgrade: 0 -> 1701 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0100-automationcomposition.sql 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0200-automationcompositiondefinition.sql 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0300-automationcompositionelement.sql 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0400-nodetemplatestate.sql 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0500-participant.sql 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0600-participantsupportedelements.sql 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 
11:53:08 policy-db-migrator | > upgrade 0700-ac_compositionId_index.sql 11:53:08 policy-db-migrator | CREATE INDEX 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0800-ac_element_fk_index.sql 11:53:08 policy-db-migrator | CREATE INDEX 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0900-dt_element_fk_index.sql 11:53:08 policy-db-migrator | CREATE INDEX 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 1000-supported_element_fk_index.sql 11:53:08 policy-db-migrator | CREATE INDEX 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 1100-automationcompositionelement_fk.sql 11:53:08 policy-db-migrator | ALTER TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 1200-nodetemplate_fk.sql 11:53:08 policy-db-migrator | ALTER TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 1300-participantsupportedelements_fk.sql 11:53:08 policy-db-migrator | ALTER TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0100-automationcomposition.sql 11:53:08 policy-db-migrator | ALTER TABLE 11:53:08 policy-db-migrator | UPDATE 0 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0200-automationcompositiondefinition.sql 11:53:08 policy-db-migrator | ALTER TABLE 11:53:08 policy-db-migrator | UPDATE 0 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0300-participantreplica.sql 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0400-participant.sql 11:53:08 policy-db-migrator | ALTER TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0500-participant_replica_fk_index.sql 11:53:08 policy-db-migrator | CREATE INDEX 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0600-participant_replica_fk.sql 11:53:08 policy-db-migrator | ALTER TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0700-automationcompositionelement.sql 11:53:08 policy-db-migrator | UPDATE 0 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0800-nodetemplatestate.sql 11:53:08 policy-db-migrator | UPDATE 0 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0100-automationcomposition.sql 11:53:08 policy-db-migrator | ALTER TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 
policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0200-automationcompositionelement.sql 11:53:08 policy-db-migrator | ALTER TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0100-automationcomposition.sql 11:53:08 policy-db-migrator | UPDATE 0 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0200-automationcompositionelement.sql 11:53:08 policy-db-migrator | UPDATE 0 11:53:08 policy-db-migrator | UPDATE 0 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0100-message.sql 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0200-messagejob.sql 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0300-messagejob_identificationId_index.sql 11:53:08 policy-db-migrator | CREATE INDEX 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0100-automationcompositionrollback.sql 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0200-automationcomposition.sql 11:53:08 policy-db-migrator | UPDATE 0 11:53:08 policy-db-migrator | UPDATE 0 11:53:08 policy-db-migrator | UPDATE 0 11:53:08 policy-db-migrator | UPDATE 0 11:53:08 policy-db-migrator | UPDATE 0 11:53:08 policy-db-migrator | UPDATE 0 11:53:08 policy-db-migrator | ALTER TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0300-automationcompositionelement.sql 11:53:08 policy-db-migrator | UPDATE 0 11:53:08 policy-db-migrator | UPDATE 0 11:53:08 policy-db-migrator | UPDATE 0 11:53:08 policy-db-migrator | UPDATE 0 11:53:08 policy-db-migrator | UPDATE 0 11:53:08 policy-db-migrator | ALTER TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0400-automationcomposition_fk.sql 11:53:08 policy-db-migrator | ALTER TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0500-automationcompositiondefinition.sql 11:53:08 policy-db-migrator | UPDATE 0 11:53:08 policy-db-migrator | UPDATE 0 11:53:08 policy-db-migrator | UPDATE 0 11:53:08 policy-db-migrator | UPDATE 0 11:53:08 policy-db-migrator | UPDATE 0 11:53:08 policy-db-migrator | ALTER TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0600-nodetemplatestate.sql 11:53:08 policy-db-migrator | UPDATE 0 11:53:08 policy-db-migrator | UPDATE 0 11:53:08 policy-db-migrator | UPDATE 0 11:53:08 policy-db-migrator | UPDATE 0 11:53:08 policy-db-migrator | ALTER TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator 
| > upgrade 0700-mb_identificationId_index.sql 11:53:08 policy-db-migrator | CREATE INDEX 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0800-participantreplica.sql 11:53:08 policy-db-migrator | UPDATE 0 11:53:08 policy-db-migrator | UPDATE 0 11:53:08 policy-db-migrator | ALTER TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0900-participantsupportedacelements.sql 11:53:08 policy-db-migrator | UPDATE 0 11:53:08 policy-db-migrator | UPDATE 0 11:53:08 policy-db-migrator | ALTER TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | clampacm: OK: upgrade (1701) 11:53:08 policy-db-migrator | List of databases 11:53:08 policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges 11:53:08 policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- 11:53:08 policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 11:53:08 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 11:53:08 policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 11:53:08 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 11:53:08 policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 11:53:08 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 11:53:08 policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 11:53:08 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 11:53:08 policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 11:53:08 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 11:53:08 policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 11:53:08 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 11:53:08 policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | 11:53:08 policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 11:53:08 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 11:53:08 policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 11:53:08 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 11:53:08 policy-db-migrator | (9 rows) 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | NOTICE: relation "clampacm_schema_changelog" already exists, skipping 11:53:08 policy-db-migrator | name | version 11:53:08 policy-db-migrator | ----------+--------- 11:53:08 policy-db-migrator | clampacm | 1701 11:53:08 policy-db-migrator | (1 row) 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime 11:53:08 policy-db-migrator | 
----+--------------------------------------------+-----------+--------------+------------+-------------------+---------+---------------------------- 11:53:08 policy-db-migrator | 1 | 0100-automationcomposition.sql | upgrade | 1300 | 1400 | 1706251147091400u | 1 | 2025-06-17 11:47:09.186007 11:53:08 policy-db-migrator | 2 | 0200-automationcompositiondefinition.sql | upgrade | 1300 | 1400 | 1706251147091400u | 1 | 2025-06-17 11:47:09.241702 11:53:08 policy-db-migrator | 3 | 0300-automationcompositionelement.sql | upgrade | 1300 | 1400 | 1706251147091400u | 1 | 2025-06-17 11:47:09.299945 11:53:08 policy-db-migrator | 4 | 0400-nodetemplatestate.sql | upgrade | 1300 | 1400 | 1706251147091400u | 1 | 2025-06-17 11:47:09.356233 11:53:08 policy-db-migrator | 5 | 0500-participant.sql | upgrade | 1300 | 1400 | 1706251147091400u | 1 | 2025-06-17 11:47:09.412648 11:53:08 policy-db-migrator | 6 | 0600-participantsupportedelements.sql | upgrade | 1300 | 1400 | 1706251147091400u | 1 | 2025-06-17 11:47:09.470821 11:53:08 policy-db-migrator | 7 | 0700-ac_compositionId_index.sql | upgrade | 1300 | 1400 | 1706251147091400u | 1 | 2025-06-17 11:47:09.524941 11:53:08 policy-db-migrator | 8 | 0800-ac_element_fk_index.sql | upgrade | 1300 | 1400 | 1706251147091400u | 1 | 2025-06-17 11:47:09.575598 11:53:08 policy-db-migrator | 9 | 0900-dt_element_fk_index.sql | upgrade | 1300 | 1400 | 1706251147091400u | 1 | 2025-06-17 11:47:09.627183 11:53:08 policy-db-migrator | 10 | 1000-supported_element_fk_index.sql | upgrade | 1300 | 1400 | 1706251147091400u | 1 | 2025-06-17 11:47:09.675008 11:53:08 policy-db-migrator | 11 | 1100-automationcompositionelement_fk.sql | upgrade | 1300 | 1400 | 1706251147091400u | 1 | 2025-06-17 11:47:09.726048 11:53:08 policy-db-migrator | 12 | 1200-nodetemplate_fk.sql | upgrade | 1300 | 1400 | 1706251147091400u | 1 | 2025-06-17 11:47:09.779202 11:53:08 policy-db-migrator | 13 | 1300-participantsupportedelements_fk.sql | upgrade | 1300 | 1400 | 1706251147091400u | 1 | 2025-06-17 11:47:09.827208 11:53:08 policy-db-migrator | 14 | 0100-automationcomposition.sql | upgrade | 1400 | 1500 | 1706251147091500u | 1 | 2025-06-17 11:47:09.881702 11:53:08 policy-db-migrator | 15 | 0200-automationcompositiondefinition.sql | upgrade | 1400 | 1500 | 1706251147091500u | 1 | 2025-06-17 11:47:09.932173 11:53:08 policy-db-migrator | 16 | 0300-participantreplica.sql | upgrade | 1400 | 1500 | 1706251147091500u | 1 | 2025-06-17 11:47:09.985884 11:53:08 policy-db-migrator | 17 | 0400-participant.sql | upgrade | 1400 | 1500 | 1706251147091500u | 1 | 2025-06-17 11:47:10.031938 11:53:08 policy-db-migrator | 18 | 0500-participant_replica_fk_index.sql | upgrade | 1400 | 1500 | 1706251147091500u | 1 | 2025-06-17 11:47:10.079894 11:53:08 policy-db-migrator | 19 | 0600-participant_replica_fk.sql | upgrade | 1400 | 1500 | 1706251147091500u | 1 | 2025-06-17 11:47:10.12915 11:53:08 policy-db-migrator | 20 | 0700-automationcompositionelement.sql | upgrade | 1400 | 1500 | 1706251147091500u | 1 | 2025-06-17 11:47:10.197269 11:53:08 policy-db-migrator | 21 | 0800-nodetemplatestate.sql | upgrade | 1400 | 1500 | 1706251147091500u | 1 | 2025-06-17 11:47:10.245011 11:53:08 policy-db-migrator | 22 | 0100-automationcomposition.sql | upgrade | 1500 | 1600 | 1706251147091600u | 1 | 2025-06-17 11:47:10.292169 11:53:08 policy-db-migrator | 23 | 0200-automationcompositionelement.sql | upgrade | 1500 | 1600 | 1706251147091600u | 1 | 2025-06-17 11:47:10.346619 11:53:08 policy-db-migrator | 24 | 0100-automationcomposition.sql | upgrade | 1501 | 
1601 | 1706251147091601u | 1 | 2025-06-17 11:47:10.397984 11:53:08 policy-db-migrator | 25 | 0200-automationcompositionelement.sql | upgrade | 1501 | 1601 | 1706251147091601u | 1 | 2025-06-17 11:47:10.448052 11:53:08 policy-db-migrator | 26 | 0100-message.sql | upgrade | 1600 | 1700 | 1706251147091700u | 1 | 2025-06-17 11:47:10.508324 11:53:08 policy-db-migrator | 27 | 0200-messagejob.sql | upgrade | 1600 | 1700 | 1706251147091700u | 1 | 2025-06-17 11:47:10.561014 11:53:08 policy-db-migrator | 28 | 0300-messagejob_identificationId_index.sql | upgrade | 1600 | 1700 | 1706251147091700u | 1 | 2025-06-17 11:47:10.613605 11:53:08 policy-db-migrator | 29 | 0100-automationcompositionrollback.sql | upgrade | 1601 | 1701 | 1706251147091701u | 1 | 2025-06-17 11:47:10.665197 11:53:08 policy-db-migrator | 30 | 0200-automationcomposition.sql | upgrade | 1601 | 1701 | 1706251147091701u | 1 | 2025-06-17 11:47:10.711745 11:53:08 policy-db-migrator | 31 | 0300-automationcompositionelement.sql | upgrade | 1601 | 1701 | 1706251147091701u | 1 | 2025-06-17 11:47:10.762729 11:53:08 policy-db-migrator | 32 | 0400-automationcomposition_fk.sql | upgrade | 1601 | 1701 | 1706251147091701u | 1 | 2025-06-17 11:47:10.817172 11:53:08 policy-db-migrator | 33 | 0500-automationcompositiondefinition.sql | upgrade | 1601 | 1701 | 1706251147091701u | 1 | 2025-06-17 11:47:10.868159 11:53:08 policy-db-migrator | 34 | 0600-nodetemplatestate.sql | upgrade | 1601 | 1701 | 1706251147091701u | 1 | 2025-06-17 11:47:10.923332 11:53:08 policy-db-migrator | 35 | 0700-mb_identificationId_index.sql | upgrade | 1601 | 1701 | 1706251147091701u | 1 | 2025-06-17 11:47:10.97562 11:53:08 policy-db-migrator | 36 | 0800-participantreplica.sql | upgrade | 1601 | 1701 | 1706251147091701u | 1 | 2025-06-17 11:47:11.028443 11:53:08 policy-db-migrator | 37 | 0900-participantsupportedacelements.sql | upgrade | 1601 | 1701 | 1706251147091701u | 1 | 2025-06-17 11:47:11.080339 11:53:08 policy-db-migrator | (37 rows) 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | clampacm: OK @ 1701 11:53:08 policy-db-migrator | Initializing pooling... 
11:53:08 policy-db-migrator | 4 blocks 11:53:08 policy-db-migrator | Preparing upgrade release version: 1600 11:53:08 policy-db-migrator | Done 11:53:08 policy-db-migrator | List of databases 11:53:08 policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges 11:53:08 policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- 11:53:08 policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 11:53:08 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 11:53:08 policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 11:53:08 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 11:53:08 policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 11:53:08 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 11:53:08 policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 11:53:08 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 11:53:08 policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 11:53:08 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 11:53:08 policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 11:53:08 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 11:53:08 policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | 11:53:08 policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 11:53:08 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 11:53:08 policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 11:53:08 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 11:53:08 policy-db-migrator | (9 rows) 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | name | version 11:53:08 policy-db-migrator | ---------+--------- 11:53:08 policy-db-migrator | pooling | 0 11:53:08 policy-db-migrator | (1 row) 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime 11:53:08 policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+-------- 11:53:08 policy-db-migrator | (0 rows) 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | pooling: upgrade available: 0 -> 1600 11:53:08 policy-db-migrator | List of databases 11:53:08 policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges 11:53:08 policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- 11:53:08 policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 11:53:08 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 11:53:08 policy-db-migrator | migration | 
policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 11:53:08 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 11:53:08 policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 11:53:08 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 11:53:08 policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 11:53:08 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 11:53:08 policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 11:53:08 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 11:53:08 policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 11:53:08 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 11:53:08 policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | 11:53:08 policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 11:53:08 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 11:53:08 policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 11:53:08 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 11:53:08 policy-db-migrator | (9 rows) 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping 11:53:08 policy-db-migrator | NOTICE: relation "pooling_schema_changelog" already exists, skipping 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | upgrade: 0 -> 1600 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0100-distributed.locking.sql 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | CREATE INDEX 11:53:08 policy-db-migrator | CREATE INDEX 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | pooling: OK: upgrade (1600) 11:53:08 policy-db-migrator | List of databases 11:53:08 policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges 11:53:08 policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- 11:53:08 policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 11:53:08 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 11:53:08 policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 11:53:08 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 11:53:08 policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 11:53:08 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 11:53:08 policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 11:53:08 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 11:53:08 policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 11:53:08 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 11:53:08 policy-db-migrator | pooling | 
policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 11:53:08 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 11:53:08 policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | 11:53:08 policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 11:53:08 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 11:53:08 policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 11:53:08 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 11:53:08 policy-db-migrator | (9 rows) 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | NOTICE: relation "pooling_schema_changelog" already exists, skipping 11:53:08 policy-db-migrator | name | version 11:53:08 policy-db-migrator | ---------+--------- 11:53:08 policy-db-migrator | pooling | 1600 11:53:08 policy-db-migrator | (1 row) 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime 11:53:08 policy-db-migrator | ----+------------------------------+-----------+--------------+------------+-------------------+---------+---------------------------- 11:53:08 policy-db-migrator | 1 | 0100-distributed.locking.sql | upgrade | 1500 | 1600 | 1706251147111600u | 1 | 2025-06-17 11:47:11.744956 11:53:08 policy-db-migrator | (1 row) 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | pooling: OK @ 1600 11:53:08 policy-db-migrator | Initializing operationshistory... 11:53:08 policy-db-migrator | 6 blocks 11:53:08 policy-db-migrator | Preparing upgrade release version: 1600 11:53:08 policy-db-migrator | Done 11:53:08 policy-db-migrator | List of databases 11:53:08 policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges 11:53:08 policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- 11:53:08 policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 11:53:08 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 11:53:08 policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 11:53:08 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 11:53:08 policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 11:53:08 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 11:53:08 policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 11:53:08 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 11:53:08 policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 11:53:08 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 11:53:08 policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 11:53:08 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 11:53:08 policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | 11:53:08 policy-db-migrator | template0 | 
postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 11:53:08 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 11:53:08 policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 11:53:08 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 11:53:08 policy-db-migrator | (9 rows) 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | name | version 11:53:08 policy-db-migrator | -------------------+--------- 11:53:08 policy-db-migrator | operationshistory | 0 11:53:08 policy-db-migrator | (1 row) 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime 11:53:08 policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+-------- 11:53:08 policy-db-migrator | (0 rows) 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | operationshistory: upgrade available: 0 -> 1600 11:53:08 policy-db-migrator | List of databases 11:53:08 policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges 11:53:08 policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- 11:53:08 policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 11:53:08 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 11:53:08 policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 11:53:08 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 11:53:08 policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 11:53:08 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 11:53:08 policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 11:53:08 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 11:53:08 policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 11:53:08 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 11:53:08 policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 11:53:08 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 11:53:08 policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | 11:53:08 policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 11:53:08 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 11:53:08 policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 11:53:08 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 11:53:08 policy-db-migrator | (9 rows) 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | NOTICE: relation "operationshistory_schema_changelog" already exists, skipping 11:53:08 policy-db-migrator | upgrade: 
0 -> 1600 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0100-ophistory_id_sequence.sql 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | rc=0 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | > upgrade 0110-operationshistory.sql 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | CREATE INDEX 11:53:08 policy-db-migrator | CREATE INDEX 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | INSERT 0 1 11:53:08 policy-db-migrator | operationshistory: OK: upgrade (1600) 11:53:08 policy-db-migrator | List of databases 11:53:08 policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges 11:53:08 policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- 11:53:08 policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 11:53:08 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 11:53:08 policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 11:53:08 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 11:53:08 policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 11:53:08 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 11:53:08 policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 11:53:08 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 11:53:08 policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 11:53:08 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 11:53:08 policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 11:53:08 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 11:53:08 policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | 11:53:08 policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 11:53:08 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 11:53:08 policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 11:53:08 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 11:53:08 policy-db-migrator | (9 rows) 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | NOTICE: relation "operationshistory_schema_changelog" already exists, skipping 11:53:08 policy-db-migrator | CREATE TABLE 11:53:08 policy-db-migrator | name | version 11:53:08 policy-db-migrator | -------------------+--------- 11:53:08 policy-db-migrator | operationshistory | 1600 11:53:08 policy-db-migrator | (1 row) 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime 11:53:08 policy-db-migrator | ----+--------------------------------+-----------+--------------+------------+-------------------+---------+---------------------------- 11:53:08 policy-db-migrator | 1 | 
0100-ophistory_id_sequence.sql | upgrade | 1500 | 1600 | 1706251147121600u | 1 | 2025-06-17 11:47:12.3763 11:53:08 policy-db-migrator | 2 | 0110-operationshistory.sql | upgrade | 1500 | 1600 | 1706251147121600u | 1 | 2025-06-17 11:47:12.442387 11:53:08 policy-db-migrator | (2 rows) 11:53:08 policy-db-migrator | 11:53:08 policy-db-migrator | operationshistory: OK @ 1600 11:53:08 policy-opa-pdp | Waiting for kafka port 9092... 11:53:08 policy-opa-pdp | nc: connect to kafka (172.17.0.5) port 9092 (tcp) failed: Connection refused 11:53:08 policy-opa-pdp | Connection to kafka (172.17.0.5) 9092 port [tcp/*] succeeded! 11:53:08 policy-opa-pdp | Waiting for pap port 6969... 11:53:08 policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused [identical "Connection refused" lines repeated while waiting for pap to come up; duplicate lines omitted] 11:53:08 policy-opa-pdp | Connection to pap (172.17.0.9) 6969 port [tcp/*] succeeded! 11:53:08 policy-opa-pdp | time="2025-06-17T11:48:15Z" level=debug msg="###################################### " 11:53:08 policy-opa-pdp | time="2025-06-17T11:48:15Z" level=debug msg="OPA-PDP: Starting initialisation " 11:53:08 policy-opa-pdp | time="2025-06-17T11:48:15Z" level=debug msg="###################################### " 11:53:08 policy-opa-pdp | time="2025-06-17T11:48:15Z" level=warning msg="KAFKA_URL not defined, using default value" 11:53:08 policy-opa-pdp | time="2025-06-17T11:48:15Z" level=warning msg="PAP_TOPIC not defined, using default value" 11:53:08 policy-opa-pdp | time="2025-06-17T11:48:15Z" level=warning msg="PATCH_TOPIC not defined, using default value" 11:53:08 policy-opa-pdp | time="2025-06-17T11:48:15Z" level=warning msg="PATCH_GROUPID not defined, using default value" 11:53:08 policy-opa-pdp | time="2025-06-17T11:48:15Z" level=warning msg="API_USER not defined, using default value" 11:53:08 policy-opa-pdp | time="2025-06-17T11:48:15Z" level=warning msg="API_PASSWORD not defined, using default value" 11:53:08 policy-opa-pdp | time="2025-06-17T11:48:15Z" level=warning msg="UseSASLForKAFKA not defined, using default value" 11:53:08 policy-opa-pdp | decodedConfig org.apache.kafka.common.security.scram.ScramLoginModule required username="policy-opa-pdp-ku" password="" 11:53:08 policy-opa-pdp | time="2025-06-17T11:48:15Z" level=debug msg="Username: " 11:53:08 policy-opa-pdp | time="2025-06-17T11:48:15Z" level=debug msg="Password: " 11:53:08 policy-opa-pdp | time="2025-06-17T11:48:15Z" level=warning msg="USE_KAFKA_FOR_PATCH not defined, using default value: false" 11:53:08 policy-opa-pdp | time="2025-06-17T11:48:15Z" level=debug msg="Configuration module: environment initialised" 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:48:15.6585+00:00] logger initialised Filepath = /var/logs/logs.log, Logsize(MB) = 10, Backups = 3, Loglevel = debug 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:48:15.6590+00:00] Name: opa-34bdbe81-f424-4a91-9535-1955322e40a7 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:48:15.6623+00:00] Starting OPA PDP Service 11:53:08 policy-opa-pdp | INFO[2025-06-17T11:48:20.6667+00:00] HTTP server started 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:48:20.6680+00:00] Create an instance of OPA Object 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:48:20.6680+00:00] Configure an instance of OPA Object 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:48:20.6691+00:00] Topic start :::: policy-pdp-pap 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:48:20.6692+00:00] Creating Kafka Consumer singleton instance 11:53:08 policy-opa-pdp | &map[auto.offset.reset:latest bootstrap.servers:kafka:9092 group.id:opa-pdp]DEBU[2025-06-17T11:48:20.6719+00:00] Topic Subscribed: policy-pdp-pap 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:48:20.6719+00:00] Created SIngleton consumer instance 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:48:20.6876+00:00] Starting
PDP Message Listener..... 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:48:30.6968+00:00] New Ticker started with interval 60000 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:48:40.7051+00:00] After registration successful delay 11:53:08 policy-opa-pdp | 2025/06/17 11:49:30 KafkaProducer or producer produce message 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:49:30.7002+00:00] [OUT|KAFKA|policy-pdp-pap] 11:53:08 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Status Registration Message","response":null,"policies":[],"name":"opa-34bdbe81-f424-4a91-9535-1955322e40a7","requestId":"681a42b2-1b8f-43e7-a089-3d3896aa8d81","pdpGroup":"opaGroup","pdpSubgroup":null,"timestampMs":"1750160970699","deploymentInstanceInfo":""} 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:49:30.7003+00:00] Sending Heartbeat ... 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:49:30.7244+00:00] [IN|KAFKA|policy-pdp-pap] 11:53:08 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Status Registration Message","response":null,"policies":[],"name":"opa-34bdbe81-f424-4a91-9535-1955322e40a7","requestId":"681a42b2-1b8f-43e7-a089-3d3896aa8d81","pdpGroup":"opaGroup","pdpSubgroup":null,"timestampMs":"1750160970699","deploymentInstanceInfo":""} 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:49:30.7246+00:00] messageType: PDP_STATUS 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:49:30.7246+00:00] discarding event of type PDP_STATUS 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:49:31.3146+00:00] [IN|KAFKA|policy-pdp-pap] 11:53:08 policy-opa-pdp | {"source":"pap-28965174-0d2a-4482-9599-2c3383d4bf34","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.slice.capacity.check":"ewogICAgInRocmVzaG9sZCI6IDcwCn0="},"policy":{"slice.capacity.check":"cGFja2FnZSBzbGljZS5jYXBhY2l0eS5jaGVjawoKIyBEZWZhdWx0IHJ1bGUgdG8gZGVueSBpZiBubyBwb2xpY3kgbWF0Y2hlcwpkZWZhdWx0IGRlY2lzaW9uIDo9IHsKCSJyZXN1bHQiOiAiUGVybWl0IiwKCSJyZWFzb24iOiAiTm8gbWF0Y2hpbmcgcnVsZXMgYXBwbGllZCIsCn0KCiMgRGVueSBydWxlIGZvciBgc3N0ID0gMWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAxCglpbnB1dC50b3RhbF9yZXNvdXJjZSA+IGRhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGQKfQoKIyBEZW55IHJ1bGUgZm9yIGBzc3QgPSAyOWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAyOQoJaW5wdXQudG90YWxfcmVzb3VyY2UgPiBkYXRhLm5vZGUuc2xpY2UuY2FwYWNpdHkuY2hlY2sudGhyZXNob2xkCn0K"}},"name":"slice.capacity.check","version":"1.0.0","metadata":{"policy-id":"slice.capacity.check","policy-version":"1.0.0"}}],"messageName":"PDP_UPDATE","requestId":"ee042dd7-6468-4c99-b1ba-33cca2aa7e33","timestampMs":1750160971249,"name":"opa-34bdbe81-f424-4a91-9535-1955322e40a7","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:49:31.3149+00:00] messageType: PDP_UPDATE 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:49:31.3151+00:00] PDP_UPDATE Message received: 
{"source":"pap-28965174-0d2a-4482-9599-2c3383d4bf34","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.slice.capacity.check":"ewogICAgInRocmVzaG9sZCI6IDcwCn0="},"policy":{"slice.capacity.check":"cGFja2FnZSBzbGljZS5jYXBhY2l0eS5jaGVjawoKIyBEZWZhdWx0IHJ1bGUgdG8gZGVueSBpZiBubyBwb2xpY3kgbWF0Y2hlcwpkZWZhdWx0IGRlY2lzaW9uIDo9IHsKCSJyZXN1bHQiOiAiUGVybWl0IiwKCSJyZWFzb24iOiAiTm8gbWF0Y2hpbmcgcnVsZXMgYXBwbGllZCIsCn0KCiMgRGVueSBydWxlIGZvciBgc3N0ID0gMWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAxCglpbnB1dC50b3RhbF9yZXNvdXJjZSA+IGRhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGQKfQoKIyBEZW55IHJ1bGUgZm9yIGBzc3QgPSAyOWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAyOQoJaW5wdXQudG90YWxfcmVzb3VyY2UgPiBkYXRhLm5vZGUuc2xpY2UuY2FwYWNpdHkuY2hlY2sudGhyZXNob2xkCn0K"}},"name":"slice.capacity.check","version":"1.0.0","metadata":{"policy-id":"slice.capacity.check","policy-version":"1.0.0"}}],"messageName":"PDP_UPDATE","requestId":"ee042dd7-6468-4c99-b1ba-33cca2aa7e33","timestampMs":1750160971249,"name":"opa-34bdbe81-f424-4a91-9535-1955322e40a7","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:49:31.3151+00:00] Policy Is Allowed: slice.capacity.check 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:49:31.3151+00:00] Validating properties data for policy: slice.capacity.check 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:49:31.3151+00:00] Validating properties policy for policy: slice.capacity.check 11:53:08 policy-opa-pdp | INFO[2025-06-17T11:49:31.3151+00:00] Validation successful for policy: slice.capacity.check 11:53:08 policy-opa-pdp | INFO[2025-06-17T11:49:31.3154+00:00] Directory created: /opt/policies/slice/capacity/check 11:53:08 policy-opa-pdp | INFO[2025-06-17T11:49:31.3155+00:00] Policy file saved: /opt/policies/slice/capacity/check/policy.rego 11:53:08 policy-opa-pdp | INFO[2025-06-17T11:49:31.3156+00:00] Directory created: /opt/data/node/slice/capacity/check 11:53:08 policy-opa-pdp | INFO[2025-06-17T11:49:31.3156+00:00] Data file saved: /opt/data/node/slice/capacity/check/data.json 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:49:31.3156+00:00] Before calling combinedoutput 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:49:31.3357+00:00] Bundle Built Sucessfully.... 
11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:49:31.3390+00:00] storage not found creating : /node 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:49:31.3391+00:00] storage not found creating : /node/slice 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:49:31.3391+00:00] storage not found creating : /node/slice/capacity 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:49:31.3392+00:00] storage not found creating : /node/slice/capacity/check 11:53:08 policy-opa-pdp | INFO[2025-06-17T11:49:31.3393+00:00] PoliciesDeployed Map: { 11:53:08 policy-opa-pdp | "deployed_policies_dict": [ 11:53:08 policy-opa-pdp | { 11:53:08 policy-opa-pdp | "data": [ 11:53:08 policy-opa-pdp | "node.slice.capacity.check" 11:53:08 policy-opa-pdp | ], 11:53:08 policy-opa-pdp | "policy": [ 11:53:08 policy-opa-pdp | "slice.capacity.check" 11:53:08 policy-opa-pdp | ], 11:53:08 policy-opa-pdp | "policy-id": "slice.capacity.check", 11:53:08 policy-opa-pdp | "policy-version": "1.0.0" 11:53:08 policy-opa-pdp | } 11:53:08 policy-opa-pdp | ] 11:53:08 policy-opa-pdp | } 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:49:31.3394+00:00] Loaded Policy: slice.capacity.check 11:53:08 policy-opa-pdp | INFO[2025-06-17T11:49:31.3395+00:00] Processed policies_to_be_deployed successfully 11:53:08 policy-opa-pdp | INFO[2025-06-17T11:49:31.3395+00:00] Sending PDP Status With Update Response 11:53:08 policy-opa-pdp | 2025/06/17 11:49:31 KafkaProducer or producer produce message 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:49:31.3397+00:00] [OUT|KAFKA|policy-pdp-pap] 11:53:08 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"ee042dd7-6468-4c99-b1ba-33cca2aa7e33","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"slice.capacity.check\": \"1.0.0\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-34bdbe81-f424-4a91-9535-1955322e40a7","requestId":"65f9852d-b34f-4cbc-b65e-a8b557f4839e","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750160971339","deploymentInstanceInfo":""} 11:53:08 policy-opa-pdp | INFO[2025-06-17T11:49:31.3398+00:00] PDP_STATUS Message Sent Successfully 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:49:31.3398+00:00] 120000 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:49:31.3399+00:00] New Ticker started with interval 120000 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:49:31.3508+00:00] [IN|KAFKA|policy-pdp-pap] 11:53:08 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"ee042dd7-6468-4c99-b1ba-33cca2aa7e33","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"slice.capacity.check\": \"1.0.0\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-34bdbe81-f424-4a91-9535-1955322e40a7","requestId":"65f9852d-b34f-4cbc-b65e-a8b557f4839e","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750160971339","deploymentInstanceInfo":""} 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:49:31.3509+00:00] messageType: PDP_STATUS 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:49:31.3509+00:00] discarding event of type PDP_STATUS 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:49:31.3847+00:00] [IN|KAFKA|policy-pdp-pap] 11:53:08 policy-opa-pdp | 
{"source":"pap-28965174-0d2a-4482-9599-2c3383d4bf34","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"1b5e2ed8-aaa0-418a-95ce-396273388e73","timestampMs":1750160971250,"name":"opa-34bdbe81-f424-4a91-9535-1955322e40a7","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:49:31.3848+00:00] messageType: PDP_STATE_CHANGE 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:49:31.3849+00:00] PDP STATE CHANGE message received: {"source":"pap-28965174-0d2a-4482-9599-2c3383d4bf34","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"1b5e2ed8-aaa0-418a-95ce-396273388e73","timestampMs":1750160971250,"name":"opa-34bdbe81-f424-4a91-9535-1955322e40a7","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:49:31.3849+00:00] State change from PASSIVE To : ACTIVE 11:53:08 policy-opa-pdp | INFO[2025-06-17T11:49:31.3850+00:00] Sending PDP Status With State Change response 11:53:08 policy-opa-pdp | 2025/06/17 11:49:31 KafkaProducer or producer produce message 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:49:31.3851+00:00] [OUT|KAFKA|policy-pdp-pap] 11:53:08 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message to Pdp State Change","response":{"responseTo":"1b5e2ed8-aaa0-418a-95ce-396273388e73","responseStatus":"SUCCESS","responseMessage":"PDP State Changed From PASSIVE TO Active"},"policies":[],"name":"opa-34bdbe81-f424-4a91-9535-1955322e40a7","requestId":"e30aecee-4a83-4946-bcfd-71dd57bacda4","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750160971385","deploymentInstanceInfo":""} 11:53:08 policy-opa-pdp | INFO[2025-06-17T11:49:31.3851+00:00] PDP_STATUS With State Change Message Sent Successfully 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:49:31.3930+00:00] [IN|KAFKA|policy-pdp-pap] 11:53:08 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message to Pdp State Change","response":{"responseTo":"1b5e2ed8-aaa0-418a-95ce-396273388e73","responseStatus":"SUCCESS","responseMessage":"PDP State Changed From PASSIVE TO Active"},"policies":[],"name":"opa-34bdbe81-f424-4a91-9535-1955322e40a7","requestId":"e30aecee-4a83-4946-bcfd-71dd57bacda4","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750160971385","deploymentInstanceInfo":""} 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:49:31.3930+00:00] messageType: PDP_STATUS 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:49:31.3930+00:00] discarding event of type PDP_STATUS 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:49:31.6617+00:00] [IN|KAFKA|policy-pdp-pap] 11:53:08 policy-opa-pdp | {"source":"pap-28965174-0d2a-4482-9599-2c3383d4bf34","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"700dc5f1-1098-43f0-9a6f-e5f8c88a75af","timestampMs":1750160971649,"name":"opa-34bdbe81-f424-4a91-9535-1955322e40a7","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:49:31.6618+00:00] messageType: PDP_UPDATE 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:49:31.6621+00:00] PDP_UPDATE Message received: 
{"source":"pap-28965174-0d2a-4482-9599-2c3383d4bf34","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"700dc5f1-1098-43f0-9a6f-e5f8c88a75af","timestampMs":1750160971649,"name":"opa-34bdbe81-f424-4a91-9535-1955322e40a7","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 11:53:08 policy-opa-pdp | INFO[2025-06-17T11:49:31.6622+00:00] Sending PDP Status With Update Response 11:53:08 policy-opa-pdp | 2025/06/17 11:49:31 KafkaProducer or producer produce message 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:49:31.6625+00:00] [OUT|KAFKA|policy-pdp-pap] 11:53:08 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"700dc5f1-1098-43f0-9a6f-e5f8c88a75af","responseStatus":"SUCCESS","responseMessage":"PDP UPDATE is successfull"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-34bdbe81-f424-4a91-9535-1955322e40a7","requestId":"6daa9b58-ec14-459a-8cfa-91b02f94b2f8","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750160971662","deploymentInstanceInfo":""} 11:53:08 policy-opa-pdp | INFO[2025-06-17T11:49:31.6626+00:00] PDP_STATUS Message Sent Successfully 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:49:31.6626+00:00] 120000 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:49:31.6692+00:00] [IN|KAFKA|policy-pdp-pap] 11:53:08 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"700dc5f1-1098-43f0-9a6f-e5f8c88a75af","responseStatus":"SUCCESS","responseMessage":"PDP UPDATE is successfull"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-34bdbe81-f424-4a91-9535-1955322e40a7","requestId":"6daa9b58-ec14-459a-8cfa-91b02f94b2f8","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750160971662","deploymentInstanceInfo":""} 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:49:31.6693+00:00] messageType: PDP_STATUS 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:49:31.6693+00:00] discarding event of type PDP_STATUS 11:53:08 policy-opa-pdp | 2025/06/17 11:50:30 KafkaProducer or producer produce message 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:50:30.7029+00:00] [OUT|KAFKA|policy-pdp-pap] 11:53:08 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp heartbeat","response":null,"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-34bdbe81-f424-4a91-9535-1955322e40a7","requestId":"9e2d1e83-d3d3-4e75-9aa2-4f0ce539bcfb","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750161030702","deploymentInstanceInfo":""} 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:50:30.7033+00:00] Sending Heartbeat ... 
11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:50:30.7119+00:00] [IN|KAFKA|policy-pdp-pap] 11:53:08 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp heartbeat","response":null,"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-34bdbe81-f424-4a91-9535-1955322e40a7","requestId":"9e2d1e83-d3d3-4e75-9aa2-4f0ce539bcfb","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750161030702","deploymentInstanceInfo":""} 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:50:30.7120+00:00] messageType: PDP_STATUS 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:50:30.7120+00:00] discarding event of type PDP_STATUS 11:53:08 policy-opa-pdp | WARN[2025-06-17T11:50:47.7870+00:00] Invalid or Missing Request ID 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:50:47.7871+00:00] Received Health Check message 11:53:08 policy-opa-pdp | INFO[2025-06-17T11:50:47.7942+00:00] PDP received a request to get data through API 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:50:47.7943+00:00] datapath to get Data : / 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:50:47.7944+00:00] Json Data at /: {"node":{"slice":{"capacity":{"check":{"threshold":70}}}},"system":{"version":{"build_commit":"","build_hostname":"","build_timestamp":"","version":"1.1.0"}}} 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:50:49.0765+00:00] [IN|KAFKA|policy-pdp-pap] 11:53:08 policy-opa-pdp | {"source":"pap-28965174-0d2a-4482-9599-2c3383d4bf34","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.zoneB":"ewogICJ6b25lIjogewogICAgInpvbmVfYWNjZXNzX2xvZ3MiOiBbCiAgICAgIHsgImxvZ19pZCI6ICJsb2cxIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDA5OjAwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJncmFudGVkIiwgInVzZXIiOiAidXNlcjEiIH0sCiAgICAgIHsgImxvZ19pZCI6ICJsb2cyIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDEwOjMwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJkZW5pZWQiLCAidXNlciI6ICJ1c2VyMiIgfSwKICAgICAgeyAibG9nX2lkIjogImxvZzMiLCAidGltZXN0YW1wIjogIjIwMjQtMTEtMDFUMTE6MDA6MDBaIiwgInpvbmVfaWQiOiAiem9uZUIiLCAiYWNjZXNzIjogImdyYW50ZWQiLCAidXNlciI6ICJ1c2VyMyIgfQogICAgXQogIH0KfQ=="},"policy":{"zoneB":"cGFja2FnZSB6b25lQgogCmltcG9ydCByZWdvLnYxCiAKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQogCmFsbG93IGlmIHsKICAgIGhhc196b25lX2FjY2VzcwogICAgYWN0aW9uX2lzX2xvZ192aWV3Cn0KIAphY3Rpb25faXNfbG9nX3ZpZXcgaWYgewogICAgInZpZXciIGluIGlucHV0LmFjdGlvbnMKfQogCmhhc196b25lX2FjY2VzcyBjb250YWlucyBhY2Nlc3NfZGF0YSBpZiB7CiAgICBzb21lIHpvbmVfZGF0YSBpbiBkYXRhLm5vZGUuem9uZUIuem9uZS56b25lX2FjY2Vzc19sb2dzCiAgICB6b25lX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KICAgIHpvbmVfZGF0YS50aW1lc3RhbXAgPCBpbnB1dC50aW1lX3BlcmlvZC50bwogICAgem9uZV9kYXRhLnpvbmVfaWQgPT0gaW5wdXQuem9uZV9pZAogICAgYWNjZXNzX2RhdGEgOj0ge2RhdGF0eXBlOiB6b25lX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"zoneB","version":"1.0.6","metadata":{"policy-id":"zoneB","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"4c397d61-9c0d-43de-a88a-03760b13f8d4","timestampMs":1750161049023,"name":"opa-34bdbe81-f424-4a91-9535-1955322e40a7","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:50:49.0766+00:00] messageType: PDP_UPDATE 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:50:49.0770+00:00] PDP_UPDATE Message received: 
{"source":"pap-28965174-0d2a-4482-9599-2c3383d4bf34","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.zoneB":"ewogICJ6b25lIjogewogICAgInpvbmVfYWNjZXNzX2xvZ3MiOiBbCiAgICAgIHsgImxvZ19pZCI6ICJsb2cxIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDA5OjAwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJncmFudGVkIiwgInVzZXIiOiAidXNlcjEiIH0sCiAgICAgIHsgImxvZ19pZCI6ICJsb2cyIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDEwOjMwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJkZW5pZWQiLCAidXNlciI6ICJ1c2VyMiIgfSwKICAgICAgeyAibG9nX2lkIjogImxvZzMiLCAidGltZXN0YW1wIjogIjIwMjQtMTEtMDFUMTE6MDA6MDBaIiwgInpvbmVfaWQiOiAiem9uZUIiLCAiYWNjZXNzIjogImdyYW50ZWQiLCAidXNlciI6ICJ1c2VyMyIgfQogICAgXQogIH0KfQ=="},"policy":{"zoneB":"cGFja2FnZSB6b25lQgogCmltcG9ydCByZWdvLnYxCiAKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQogCmFsbG93IGlmIHsKICAgIGhhc196b25lX2FjY2VzcwogICAgYWN0aW9uX2lzX2xvZ192aWV3Cn0KIAphY3Rpb25faXNfbG9nX3ZpZXcgaWYgewogICAgInZpZXciIGluIGlucHV0LmFjdGlvbnMKfQogCmhhc196b25lX2FjY2VzcyBjb250YWlucyBhY2Nlc3NfZGF0YSBpZiB7CiAgICBzb21lIHpvbmVfZGF0YSBpbiBkYXRhLm5vZGUuem9uZUIuem9uZS56b25lX2FjY2Vzc19sb2dzCiAgICB6b25lX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KICAgIHpvbmVfZGF0YS50aW1lc3RhbXAgPCBpbnB1dC50aW1lX3BlcmlvZC50bwogICAgem9uZV9kYXRhLnpvbmVfaWQgPT0gaW5wdXQuem9uZV9pZAogICAgYWNjZXNzX2RhdGEgOj0ge2RhdGF0eXBlOiB6b25lX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"zoneB","version":"1.0.6","metadata":{"policy-id":"zoneB","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"4c397d61-9c0d-43de-a88a-03760b13f8d4","timestampMs":1750161049023,"name":"opa-34bdbe81-f424-4a91-9535-1955322e40a7","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:50:49.0770+00:00] Check if Policy is Already Deployed: { 11:53:08 policy-opa-pdp | "deployed_policies_dict": [ 11:53:08 policy-opa-pdp | { 11:53:08 policy-opa-pdp | "data": [ 11:53:08 policy-opa-pdp | "node.slice.capacity.check" 11:53:08 policy-opa-pdp | ], 11:53:08 policy-opa-pdp | "policy": [ 11:53:08 policy-opa-pdp | "slice.capacity.check" 11:53:08 policy-opa-pdp | ], 11:53:08 policy-opa-pdp | "policy-id": "slice.capacity.check", 11:53:08 policy-opa-pdp | "policy-version": "1.0.0" 11:53:08 policy-opa-pdp | } 11:53:08 policy-opa-pdp | ] 11:53:08 policy-opa-pdp | } 11:53:08 policy-opa-pdp | INFO[2025-06-17T11:50:49.0771+00:00] Policy is new and should be deployed: zoneB 1.0.6 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:50:49.0771+00:00] Policy Is Allowed: zoneB 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:50:49.0771+00:00] Validating properties data for policy: zoneB 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:50:49.0771+00:00] Validating properties policy for policy: zoneB 11:53:08 policy-opa-pdp | INFO[2025-06-17T11:50:49.0772+00:00] Validation successful for policy: zoneB 11:53:08 policy-opa-pdp | INFO[2025-06-17T11:50:49.0773+00:00] Directory created: /opt/policies/zoneB 11:53:08 policy-opa-pdp | INFO[2025-06-17T11:50:49.0774+00:00] Policy file saved: /opt/policies/zoneB/policy.rego 11:53:08 policy-opa-pdp | INFO[2025-06-17T11:50:49.0775+00:00] Directory created: /opt/data/node/zoneB 11:53:08 policy-opa-pdp | INFO[2025-06-17T11:50:49.0776+00:00] Data file saved: /opt/data/node/zoneB/data.json 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:50:49.0776+00:00] Before calling combinedoutput 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:50:49.1000+00:00] Bundle Built Sucessfully.... 
11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:50:49.1033+00:00] storage not found creating : /node/zoneB 11:53:08 policy-opa-pdp | INFO[2025-06-17T11:50:49.1034+00:00] PoliciesDeployed Map: { 11:53:08 policy-opa-pdp | "deployed_policies_dict": [ 11:53:08 policy-opa-pdp | { 11:53:08 policy-opa-pdp | "data": [ 11:53:08 policy-opa-pdp | "node.slice.capacity.check" 11:53:08 policy-opa-pdp | ], 11:53:08 policy-opa-pdp | "policy": [ 11:53:08 policy-opa-pdp | "slice.capacity.check" 11:53:08 policy-opa-pdp | ], 11:53:08 policy-opa-pdp | "policy-id": "slice.capacity.check", 11:53:08 policy-opa-pdp | "policy-version": "1.0.0" 11:53:08 policy-opa-pdp | }, 11:53:08 policy-opa-pdp | { 11:53:08 policy-opa-pdp | "data": [ 11:53:08 policy-opa-pdp | "node.zoneB" 11:53:08 policy-opa-pdp | ], 11:53:08 policy-opa-pdp | "policy": [ 11:53:08 policy-opa-pdp | "zoneB" 11:53:08 policy-opa-pdp | ], 11:53:08 policy-opa-pdp | "policy-id": "zoneB", 11:53:08 policy-opa-pdp | "policy-version": "1.0.6" 11:53:08 policy-opa-pdp | } 11:53:08 policy-opa-pdp | ] 11:53:08 policy-opa-pdp | } 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:50:49.1034+00:00] Loaded Policy: zoneB 11:53:08 policy-opa-pdp | INFO[2025-06-17T11:50:49.1034+00:00] Processed policies_to_be_deployed successfully 11:53:08 policy-opa-pdp | INFO[2025-06-17T11:50:49.1035+00:00] Sending PDP Status With Update Response 11:53:08 policy-opa-pdp | 2025/06/17 11:50:49 KafkaProducer or producer produce message 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:50:49.1036+00:00] [OUT|KAFKA|policy-pdp-pap] 11:53:08 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"4c397d61-9c0d-43de-a88a-03760b13f8d4","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"zoneB\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"zoneB","version":"1.0.6"}],"name":"opa-34bdbe81-f424-4a91-9535-1955322e40a7","requestId":"f1752940-7df0-42dd-83eb-8a6f9bffd0e0","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750161049103","deploymentInstanceInfo":""} 11:53:08 policy-opa-pdp | INFO[2025-06-17T11:50:49.1036+00:00] PDP_STATUS Message Sent Successfully 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:50:49.1036+00:00] 0 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:50:49.1149+00:00] [IN|KAFKA|policy-pdp-pap] 11:53:08 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"4c397d61-9c0d-43de-a88a-03760b13f8d4","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"zoneB\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"zoneB","version":"1.0.6"}],"name":"opa-34bdbe81-f424-4a91-9535-1955322e40a7","requestId":"f1752940-7df0-42dd-83eb-8a6f9bffd0e0","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750161049103","deploymentInstanceInfo":""} 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:50:49.1150+00:00] messageType: PDP_STATUS 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:50:49.1150+00:00] discarding event of type PDP_STATUS 11:53:08 policy-opa-pdp | INFO[2025-06-17T11:51:13.2647+00:00] PDP received a request to get data through API 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:13.2648+00:00] datapath to get Data : /node/zoneB/zone 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:13.2648+00:00] 
Json Data at /node/zoneB/zone: {"zone_access_logs":[{"access":"granted","log_id":"log1","timestamp":"2024-11-01T09:00:00Z","user":"user1","zone_id":"zoneA"},{"access":"denied","log_id":"log2","timestamp":"2024-11-01T10:30:00Z","user":"user2","zone_id":"zoneA"},{"access":"granted","log_id":"log3","timestamp":"2024-11-01T11:00:00Z","user":"user3","zone_id":"zoneB"}]} 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:13.2752+00:00] PDP received a decision request. 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:13.2753+00:00] Headers processed for requestId: Unknown 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:13.2755+00:00] Validation successful for request fields 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:13.2756+00:00] SDK making a decision 11:53:08 policy-opa-pdp | {"decision_id":"fa2201b9-8519-43d7-97de-6133f3a9428b","input":{"actions":["view"],"datatypes":["access","user"],"log_id":"log1","time_period":{"from":"2024-11-01T09:00:00Z","to":"2024-11-01T10:00:00Z"},"zone_id":"zoneA"},"labels":{"id":"869e472d-7165-4b5e-94a9-64cc59ee02c3","version":"1.1.0"},"level":"info","metrics":{"timer_rego_external_resolve_ns":570,"timer_rego_query_compile_ns":97591,"timer_rego_query_eval_ns":407356,"timer_rego_query_parse_ns":77811,"timer_sdk_decision_eval_ns":743091},"msg":"Decision Log","nd_builtin_cache":null,"path":"zoneB","result":{"action_is_log_view":true,"allow":true,"has_zone_access":[{"access":"granted","user":"user1"}]},"time":"2025-06-17T11:51:13Z","timestamp":"2025-06-17T11:51:13.275637701Z","type":"openpolicyagent.org/decision_logs"} 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:13.2768+00:00] RAW opa Decision output: 11:53:08 policy-opa-pdp | { 11:53:08 policy-opa-pdp | "ID": "fa2201b9-8519-43d7-97de-6133f3a9428b", 11:53:08 policy-opa-pdp | "Result": { 11:53:08 policy-opa-pdp | "action_is_log_view": true, 11:53:08 policy-opa-pdp | "allow": true, 11:53:08 policy-opa-pdp | "has_zone_access": [ 11:53:08 policy-opa-pdp | { 11:53:08 policy-opa-pdp | "access": "granted", 11:53:08 policy-opa-pdp | "user": "user1" 11:53:08 policy-opa-pdp | } 11:53:08 policy-opa-pdp | ] 11:53:08 policy-opa-pdp | }, 11:53:08 policy-opa-pdp | "Provenance": { 11:53:08 policy-opa-pdp | "version": "1.1.0", 11:53:08 policy-opa-pdp | "build_commit": "", 11:53:08 policy-opa-pdp | "build_timestamp": "", 11:53:08 policy-opa-pdp | "build_hostname": "" 11:53:08 policy-opa-pdp | } 11:53:08 policy-opa-pdp | } 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:13.2846+00:00] PDP received a decision request. 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:13.2847+00:00] Headers processed for requestId: Unknown 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:13.2850+00:00] Validation successful for request fields 11:53:08 policy-opa-pdp | WARN[2025-06-17T11:51:13.2850+00:00] Policy Name zoeB does not exist 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:13.2927+00:00] PDP received a decision request. 
11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:13.2928+00:00] Headers processed for requestId: Unknown 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:13.2932+00:00] Validation successful for request fields 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:13.2933+00:00] SDK making a decision 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:13.2941+00:00] RAW opa Decision output: 11:53:08 policy-opa-pdp | { 11:53:08 policy-opa-pdp | "ID": "835e7b71-4a53-41a9-9393-d2ba96accd66", 11:53:08 policy-opa-pdp | "Result": { 11:53:08 policy-opa-pdp | "action_is_log_view": true, 11:53:08 policy-opa-pdp | "allow": true, 11:53:08 policy-opa-pdp | "has_zone_access": [ 11:53:08 policy-opa-pdp | { 11:53:08 policy-opa-pdp | "access": "granted", 11:53:08 policy-opa-pdp | "user": "user1" 11:53:08 policy-opa-pdp | } 11:53:08 policy-opa-pdp | ] 11:53:08 policy-opa-pdp | }, 11:53:08 policy-opa-pdp | "Provenance": { 11:53:08 policy-opa-pdp | "version": "1.1.0", 11:53:08 policy-opa-pdp | "build_commit": "", 11:53:08 policy-opa-pdp | "build_timestamp": "", 11:53:08 policy-opa-pdp | "build_hostname": "" 11:53:08 policy-opa-pdp | } 11:53:08 policy-opa-pdp | } 11:53:08 policy-opa-pdp | {"decision_id":"835e7b71-4a53-41a9-9393-d2ba96accd66","input":{"actions":["view"],"datatypes":["access","user"],"log_id":"log1","time_period":{"from":"2024-11-01T09:00:00Z","to":"2024-11-01T10:00:00Z"},"zone_id":"zoneA"},"labels":{"id":"869e472d-7165-4b5e-94a9-64cc59ee02c3","version":"1.1.0"},"level":"info","metrics":{"timer_rego_external_resolve_ns":850,"timer_rego_query_eval_ns":476196,"timer_sdk_decision_eval_ns":589379},"msg":"Decision Log","nd_builtin_cache":null,"path":"zoneB","result":{"action_is_log_view":true,"allow":true,"has_zone_access":[{"access":"granted","user":"user1"}]},"time":"2025-06-17T11:51:13Z","timestamp":"2025-06-17T11:51:13.293362841Z","type":"openpolicyagent.org/decision_logs"} 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:13.5739+00:00] [IN|KAFKA|policy-pdp-pap] 11:53:08 policy-opa-pdp | {"source":"pap-28965174-0d2a-4482-9599-2c3383d4bf34","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"zoneB","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"247f4b5d-dde8-441b-8b74-988c71a54520","timestampMs":1750161073546,"name":"opa-34bdbe81-f424-4a91-9535-1955322e40a7","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:13.5740+00:00] messageType: PDP_UPDATE 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:13.5741+00:00] PDP_UPDATE Message received: {"source":"pap-28965174-0d2a-4482-9599-2c3383d4bf34","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"zoneB","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"247f4b5d-dde8-441b-8b74-988c71a54520","timestampMs":1750161073546,"name":"opa-34bdbe81-f424-4a91-9535-1955322e40a7","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 11:53:08 policy-opa-pdp | INFO[2025-06-17T11:51:13.5742+00:00] Found Policies to be undeployed 11:53:08 policy-opa-pdp | INFO[2025-06-17T11:51:13.5742+00:00] Extracted Policy Name: zoneB, Version: 1.0.6 for undeployment 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:13.5742+00:00] Deleting Policy from OPA : /zoneB 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:13.5770+00:00] Removing policy directory: /opt/policies/zoneB 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:13.5773+00:00] Deleting data from OPA : /node/zoneB 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:13.5773+00:00] Analyzing dataPath: /node/zoneB 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:13.5773+00:00] Path 
segments: [ node zoneB] 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:13.5773+00:00] Path doesn't have any parent-child hierarchy;so returning the original path: /node/zoneB 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:13.5773+00:00] Removing data directory: /opt/data/node/zoneB 11:53:08 policy-opa-pdp | INFO[2025-06-17T11:51:13.5775+00:00] PoliciesDeployed Map: { 11:53:08 policy-opa-pdp | "deployed_policies_dict": [ 11:53:08 policy-opa-pdp | { 11:53:08 policy-opa-pdp | "data": [ 11:53:08 policy-opa-pdp | "node.slice.capacity.check" 11:53:08 policy-opa-pdp | ], 11:53:08 policy-opa-pdp | "policy": [ 11:53:08 policy-opa-pdp | "slice.capacity.check" 11:53:08 policy-opa-pdp | ], 11:53:08 policy-opa-pdp | "policy-id": "slice.capacity.check", 11:53:08 policy-opa-pdp | "policy-version": "1.0.0" 11:53:08 policy-opa-pdp | } 11:53:08 policy-opa-pdp | ] 11:53:08 policy-opa-pdp | } 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:13.5775+00:00] Policies Map After Undeployment : { 11:53:08 policy-opa-pdp | "deployed_policies_dict": [ 11:53:08 policy-opa-pdp | { 11:53:08 policy-opa-pdp | "data": [ 11:53:08 policy-opa-pdp | "node.slice.capacity.check" 11:53:08 policy-opa-pdp | ], 11:53:08 policy-opa-pdp | "policy": [ 11:53:08 policy-opa-pdp | "slice.capacity.check" 11:53:08 policy-opa-pdp | ], 11:53:08 policy-opa-pdp | "policy-id": "slice.capacity.check", 11:53:08 policy-opa-pdp | "policy-version": "1.0.0" 11:53:08 policy-opa-pdp | } 11:53:08 policy-opa-pdp | ] 11:53:08 policy-opa-pdp | } 11:53:08 policy-opa-pdp | INFO[2025-06-17T11:51:13.5775+00:00] Processed policies_to_be_undeployed successfully 11:53:08 policy-opa-pdp | 2025/06/17 11:51:13 KafkaProducer or producer produce message 11:53:08 policy-opa-pdp | INFO[2025-06-17T11:51:13.5776+00:00] Sending PDP Status With Update Response 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:13.5776+00:00] [OUT|KAFKA|policy-pdp-pap] 11:53:08 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"247f4b5d-dde8-441b-8b74-988c71a54520","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"zoneB\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-34bdbe81-f424-4a91-9535-1955322e40a7","requestId":"aba334fb-e8e8-44f5-84a4-48ece5224c79","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750161073577","deploymentInstanceInfo":""} 11:53:08 policy-opa-pdp | INFO[2025-06-17T11:51:13.5776+00:00] PDP_STATUS Message Sent Successfully 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:13.5776+00:00] 0 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:13.5851+00:00] [IN|KAFKA|policy-pdp-pap] 11:53:08 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"247f4b5d-dde8-441b-8b74-988c71a54520","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"zoneB\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-34bdbe81-f424-4a91-9535-1955322e40a7","requestId":"aba334fb-e8e8-44f5-84a4-48ece5224c79","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750161073577","deploymentInstanceInfo":""} 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:13.5852+00:00] messageType: PDP_STATUS 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:13.5852+00:00] discarding event of type PDP_STATUS 11:53:08 
policy-opa-pdp | DEBU[2025-06-17T11:51:14.7130+00:00] [IN|KAFKA|policy-pdp-pap] 11:53:08 policy-opa-pdp | {"source":"pap-28965174-0d2a-4482-9599-2c3383d4bf34","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.vehicle":"ewogICJ2ZWhpY2xlcyI6IFsKICAgIHsgInZlaGljbGVfaWQiOiAidjEiLCAib3duZXIiOiAidXNlcjEiLCAidHlwZSI6ICJjYXIiLCAic3RhdHVzIjogImF2YWlsYWJsZSIgfSwKICAgIHsgInZlaGljbGVfaWQiOiAidjIiLCAib3duZXIiOiAidXNlcjIiLCAidHlwZSI6ICJiaWtlIiwgInN0YXR1cyI6ICJpbiB1c2UiIH0KICBdCn0K"},"policy":{"vehicle":"cGFja2FnZSB2ZWhpY2xlCgppbXBvcnQgIHJlZ28udjEKCmRlZmF1bHQgYWxsb3cgOj0gZmFsc2UKCmFsbG93IGlmIHsKICAgIHVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzCiAgICBhY3Rpb25faXNfZ3JhbnRlZAp9CgphY3Rpb25faXNfZ3JhbnRlZCBpZiB7CiAgICAidXNlIiBpbiBpbnB1dC5hY3Rpb25zCn0KCnVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzIGNvbnRhaW5zIHZlaGljbGVfZGF0YSBpZiB7CiAgICBzb21lIHZlaGljbGUgaW4gZGF0YS5ub2RlLnZlaGljbGUudmVoaWNsZXMKICAgIHZlaGljbGUudmVoaWNsZV9pZCA9PSBpbnB1dC52ZWhpY2xlX2lkCiAgICB2ZWhpY2xlLm93bmVyID09IGlucHV0LnVzZXIKICAgIHZlaGljbGVfZGF0YSA6PSB7aW5mbzogdmVoaWNsZVtpbmZvXSB8IGluZm8gaW4gaW5wdXQuYXR0cmlidXRlc30KfQo="}},"name":"vehicle","version":"1.0.6","metadata":{"policy-id":"vehicle","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"fd848027-65b5-4ee2-97e4-134b66d98a26","timestampMs":1750161074693,"name":"opa-34bdbe81-f424-4a91-9535-1955322e40a7","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:14.7131+00:00] messageType: PDP_UPDATE 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:14.7133+00:00] PDP_UPDATE Message received: {"source":"pap-28965174-0d2a-4482-9599-2c3383d4bf34","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.vehicle":"ewogICJ2ZWhpY2xlcyI6IFsKICAgIHsgInZlaGljbGVfaWQiOiAidjEiLCAib3duZXIiOiAidXNlcjEiLCAidHlwZSI6ICJjYXIiLCAic3RhdHVzIjogImF2YWlsYWJsZSIgfSwKICAgIHsgInZlaGljbGVfaWQiOiAidjIiLCAib3duZXIiOiAidXNlcjIiLCAidHlwZSI6ICJiaWtlIiwgInN0YXR1cyI6ICJpbiB1c2UiIH0KICBdCn0K"},"policy":{"vehicle":"cGFja2FnZSB2ZWhpY2xlCgppbXBvcnQgIHJlZ28udjEKCmRlZmF1bHQgYWxsb3cgOj0gZmFsc2UKCmFsbG93IGlmIHsKICAgIHVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzCiAgICBhY3Rpb25faXNfZ3JhbnRlZAp9CgphY3Rpb25faXNfZ3JhbnRlZCBpZiB7CiAgICAidXNlIiBpbiBpbnB1dC5hY3Rpb25zCn0KCnVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzIGNvbnRhaW5zIHZlaGljbGVfZGF0YSBpZiB7CiAgICBzb21lIHZlaGljbGUgaW4gZGF0YS5ub2RlLnZlaGljbGUudmVoaWNsZXMKICAgIHZlaGljbGUudmVoaWNsZV9pZCA9PSBpbnB1dC52ZWhpY2xlX2lkCiAgICB2ZWhpY2xlLm93bmVyID09IGlucHV0LnVzZXIKICAgIHZlaGljbGVfZGF0YSA6PSB7aW5mbzogdmVoaWNsZVtpbmZvXSB8IGluZm8gaW4gaW5wdXQuYXR0cmlidXRlc30KfQo="}},"name":"vehicle","version":"1.0.6","metadata":{"policy-id":"vehicle","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"fd848027-65b5-4ee2-97e4-134b66d98a26","timestampMs":1750161074693,"name":"opa-34bdbe81-f424-4a91-9535-1955322e40a7","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:14.7133+00:00] Check if Policy is Already Deployed: { 11:53:08 policy-opa-pdp | "deployed_policies_dict": [ 11:53:08 policy-opa-pdp | { 11:53:08 policy-opa-pdp | "data": [ 11:53:08 policy-opa-pdp | "node.slice.capacity.check" 11:53:08 policy-opa-pdp | ], 11:53:08 policy-opa-pdp | "policy": [ 11:53:08 policy-opa-pdp | "slice.capacity.check" 11:53:08 policy-opa-pdp | ], 11:53:08 policy-opa-pdp | "policy-id": "slice.capacity.check", 11:53:08 policy-opa-pdp | "policy-version": "1.0.0" 11:53:08 policy-opa-pdp | } 11:53:08 policy-opa-pdp | ] 
11:53:08 policy-opa-pdp | } 11:53:08 policy-opa-pdp | INFO[2025-06-17T11:51:14.7133+00:00] Policy is new and should be deployed: vehicle 1.0.6 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:14.7134+00:00] Policy Is Allowed: vehicle 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:14.7134+00:00] Validating properties data for policy: vehicle 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:14.7134+00:00] Validating properties policy for policy: vehicle 11:53:08 policy-opa-pdp | INFO[2025-06-17T11:51:14.7134+00:00] Validation successful for policy: vehicle 11:53:08 policy-opa-pdp | INFO[2025-06-17T11:51:14.7135+00:00] Directory created: /opt/policies/vehicle 11:53:08 policy-opa-pdp | INFO[2025-06-17T11:51:14.7136+00:00] Policy file saved: /opt/policies/vehicle/policy.rego 11:53:08 policy-opa-pdp | INFO[2025-06-17T11:51:14.7136+00:00] Directory created: /opt/data/node/vehicle 11:53:08 policy-opa-pdp | INFO[2025-06-17T11:51:14.7136+00:00] Data file saved: /opt/data/node/vehicle/data.json 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:14.7137+00:00] Before calling combinedoutput 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:14.7350+00:00] Bundle Built Sucessfully.... 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:14.7407+00:00] storage not found creating : /node/vehicle 11:53:08 policy-opa-pdp | INFO[2025-06-17T11:51:14.7409+00:00] PoliciesDeployed Map: { 11:53:08 policy-opa-pdp | "deployed_policies_dict": [ 11:53:08 policy-opa-pdp | { 11:53:08 policy-opa-pdp | "data": [ 11:53:08 policy-opa-pdp | "node.slice.capacity.check" 11:53:08 policy-opa-pdp | ], 11:53:08 policy-opa-pdp | "policy": [ 11:53:08 policy-opa-pdp | "slice.capacity.check" 11:53:08 policy-opa-pdp | ], 11:53:08 policy-opa-pdp | "policy-id": "slice.capacity.check", 11:53:08 policy-opa-pdp | "policy-version": "1.0.0" 11:53:08 policy-opa-pdp | }, 11:53:08 policy-opa-pdp | { 11:53:08 policy-opa-pdp | "data": [ 11:53:08 policy-opa-pdp | "node.vehicle" 11:53:08 policy-opa-pdp | ], 11:53:08 policy-opa-pdp | "policy": [ 11:53:08 policy-opa-pdp | "vehicle" 11:53:08 policy-opa-pdp | ], 11:53:08 policy-opa-pdp | "policy-id": "vehicle", 11:53:08 policy-opa-pdp | "policy-version": "1.0.6" 11:53:08 policy-opa-pdp | } 11:53:08 policy-opa-pdp | ] 11:53:08 policy-opa-pdp | } 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:14.7409+00:00] Loaded Policy: vehicle 11:53:08 policy-opa-pdp | INFO[2025-06-17T11:51:14.7409+00:00] Processed policies_to_be_deployed successfully 11:53:08 policy-opa-pdp | INFO[2025-06-17T11:51:14.7410+00:00] Sending PDP Status With Update Response 11:53:08 policy-opa-pdp | 2025/06/17 11:51:14 KafkaProducer or producer produce message 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:14.7411+00:00] [OUT|KAFKA|policy-pdp-pap] 11:53:08 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"fd848027-65b5-4ee2-97e4-134b66d98a26","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"vehicle\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"vehicle","version":"1.0.6"}],"name":"opa-34bdbe81-f424-4a91-9535-1955322e40a7","requestId":"95b70497-bf79-40cc-9b11-cc749944a7db","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750161074741","deploymentInstanceInfo":""} 11:53:08 policy-opa-pdp | INFO[2025-06-17T11:51:14.7411+00:00] PDP_STATUS Message Sent Successfully 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:14.7412+00:00] 0 
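The vehicle PDP_UPDATE above should likewise decode to the data document and Rego policy below (a decoded rendering of the base64 payloads in the message). The decision requests at 11:51:38 exercise exactly this rule: input user "user1", vehicle_id "v1", attributes ["type","status"] yields user_has_vehicle_access = [{"status":"available","type":"car"}] and allow = true.

    node.vehicle (data.json):
    {
      "vehicles": [
        { "vehicle_id": "v1", "owner": "user1", "type": "car", "status": "available" },
        { "vehicle_id": "v2", "owner": "user2", "type": "bike", "status": "in use" }
      ]
    }

    vehicle (policy.rego):
    package vehicle

    import rego.v1

    default allow := false

    allow if {
        user_has_vehicle_access
        action_is_granted
    }

    action_is_granted if {
        "use" in input.actions
    }

    user_has_vehicle_access contains vehicle_data if {
        some vehicle in data.node.vehicle.vehicles
        vehicle.vehicle_id == input.vehicle_id
        vehicle.owner == input.user
        vehicle_data := {info: vehicle[info] | info in input.attributes}
    }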
11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:14.7484+00:00] [IN|KAFKA|policy-pdp-pap] 11:53:08 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"fd848027-65b5-4ee2-97e4-134b66d98a26","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"vehicle\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"vehicle","version":"1.0.6"}],"name":"opa-34bdbe81-f424-4a91-9535-1955322e40a7","requestId":"95b70497-bf79-40cc-9b11-cc749944a7db","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750161074741","deploymentInstanceInfo":""} 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:14.7485+00:00] messageType: PDP_STATUS 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:14.7485+00:00] discarding event of type PDP_STATUS 11:53:08 policy-opa-pdp | 2025/06/17 11:51:31 KafkaProducer or producer produce message 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:31.3494+00:00] [OUT|KAFKA|policy-pdp-pap] 11:53:08 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp heartbeat","response":null,"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"vehicle","version":"1.0.6"}],"name":"opa-34bdbe81-f424-4a91-9535-1955322e40a7","requestId":"459a7479-a768-40b0-831a-da4dd3106f7c","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750161091349","deploymentInstanceInfo":""} 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:31.3494+00:00] Sending Heartbeat ... 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:31.3575+00:00] [IN|KAFKA|policy-pdp-pap] 11:53:08 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp heartbeat","response":null,"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"vehicle","version":"1.0.6"}],"name":"opa-34bdbe81-f424-4a91-9535-1955322e40a7","requestId":"459a7479-a768-40b0-831a-da4dd3106f7c","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750161091349","deploymentInstanceInfo":""} 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:31.3575+00:00] messageType: PDP_STATUS 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:31.3576+00:00] discarding event of type PDP_STATUS 11:53:08 policy-opa-pdp | INFO[2025-06-17T11:51:38.7807+00:00] PDP received a request to get data through API 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:38.7808+00:00] datapath to get Data : /node/vehicle 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:38.7808+00:00] Json Data at /node/vehicle: {"vehicles":[{"owner":"user1","status":"available","type":"car","vehicle_id":"v1"},{"owner":"user2","status":"in use","type":"bike","vehicle_id":"v2"}]} 11:53:08 policy-opa-pdp | INFO[2025-06-17T11:51:38.7909+00:00] PDP received a request to update data through API 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:38.7913+00:00] All fields are valid! 
11:53:08 policy-opa-pdp | INFO[2025-06-17T11:51:38.7915+00:00] data : [map[op:add path:/round value:trail]] 11:53:08 policy-opa-pdp | INFO[2025-06-17T11:51:38.7915+00:00] policy name : vehicle 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:38.7917+00:00] deployedPolicies [map[data:[node.slice.capacity.check] policy:[slice.capacity.check] policy-id:slice.capacity.check policy-version:1.0.0] map[data:[node.vehicle] policy:[vehicle] policy-id:vehicle policy-version:1.0.6]] 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:38.7917+00:00] dirParts : [ node vehicle] 11:53:08 policy-opa-pdp | INFO[2025-06-17T11:51:38.7921+00:00] Matched policy: &{Data:[node.vehicle] Policy:[vehicle] PolicyID:vehicle PolicyVersion:1.0.6} 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:38.7921+00:00] root: /node/vehicle 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:38.7923+00:00] path : round 11:53:08 policy-opa-pdp | INFO[2025-06-17T11:51:38.7923+00:00] calling ParsePatchPathEscaped to check the path 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:38.7924+00:00] No path conflicts detected 11:53:08 policy-opa-pdp | INFO[2025-06-17T11:51:38.7925+00:00] Updated the data in the corresponding path successfully 11:53:08 policy-opa-pdp | INFO[2025-06-17T11:51:38.7995+00:00] PDP received a request to get data through API 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:38.7995+00:00] datapath to get Data : /node/vehicle 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:38.7996+00:00] Json Data at /node/vehicle: {"round":"trail","vehicles":[{"owner":"user1","status":"available","type":"car","vehicle_id":"v1"},{"owner":"user2","status":"in use","type":"bike","vehicle_id":"v2"}]} 11:53:08 policy-opa-pdp | INFO[2025-06-17T11:51:38.8112+00:00] PDP received a request to update data through API 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:38.8116+00:00] All fields are valid! 
11:53:08 policy-opa-pdp | INFO[2025-06-17T11:51:38.8117+00:00] data : [map[op:replace path:/round value:%!s(float64=578)]] 11:53:08 policy-opa-pdp | INFO[2025-06-17T11:51:38.8117+00:00] policy name : vehicle 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:38.8117+00:00] deployedPolicies [map[data:[node.slice.capacity.check] policy:[slice.capacity.check] policy-id:slice.capacity.check policy-version:1.0.0] map[data:[node.vehicle] policy:[vehicle] policy-id:vehicle policy-version:1.0.6]] 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:38.8118+00:00] dirParts : [ node vehicle] 11:53:08 policy-opa-pdp | INFO[2025-06-17T11:51:38.8118+00:00] Matched policy: &{Data:[node.vehicle] Policy:[vehicle] PolicyID:vehicle PolicyVersion:1.0.6} 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:38.8118+00:00] root: /node/vehicle 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:38.8118+00:00] path : round 11:53:08 policy-opa-pdp | INFO[2025-06-17T11:51:38.8118+00:00] calling ParsePatchPathEscaped to check the path 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:38.8119+00:00] No path conflicts detected 11:53:08 policy-opa-pdp | INFO[2025-06-17T11:51:38.8119+00:00] Updated the data in the corresponding path successfully 11:53:08 policy-opa-pdp | INFO[2025-06-17T11:51:38.8186+00:00] PDP received a request to get data through API 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:38.8187+00:00] datapath to get Data : /node/vehicle 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:38.8189+00:00] Json Data at /node/vehicle: {"round":578,"vehicles":[{"owner":"user1","status":"available","type":"car","vehicle_id":"v1"},{"owner":"user2","status":"in use","type":"bike","vehicle_id":"v2"}]} 11:53:08 policy-opa-pdp | INFO[2025-06-17T11:51:38.8292+00:00] PDP received a request to update data through API 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:38.8296+00:00] All fields are valid! 
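The data-update requests in this block patch /node/vehicle and are logged as Go maps; rendered as standard JSON Patch (RFC 6902) bodies they correspond roughly to the documents below (a reconstruction from the logged values, not the literal request payloads sent by the test). The odd value:%!s(float64=578) in the log is simply Go's fmt output for the number 578 formatted with the %s verb. The matching remove operation appears in the lines that follow.

    [ { "op": "add",     "path": "/round", "value": "trail" } ]
    [ { "op": "replace", "path": "/round", "value": 578 } ]
    [ { "op": "remove",  "path": "/round" } ]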
11:53:08 policy-opa-pdp | INFO[2025-06-17T11:51:38.8296+00:00] data : [map[op:remove path:/round]] 11:53:08 policy-opa-pdp | INFO[2025-06-17T11:51:38.8296+00:00] policy name : vehicle 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:38.8297+00:00] deployedPolicies [map[data:[node.slice.capacity.check] policy:[slice.capacity.check] policy-id:slice.capacity.check policy-version:1.0.0] map[data:[node.vehicle] policy:[vehicle] policy-id:vehicle policy-version:1.0.6]] 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:38.8297+00:00] dirParts : [ node vehicle] 11:53:08 policy-opa-pdp | INFO[2025-06-17T11:51:38.8298+00:00] Matched policy: &{Data:[node.vehicle] Policy:[vehicle] PolicyID:vehicle PolicyVersion:1.0.6} 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:38.8298+00:00] root: /node/vehicle 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:38.8298+00:00] path : round 11:53:08 policy-opa-pdp | INFO[2025-06-17T11:51:38.8298+00:00] calling ParsePatchPathEscaped to check the path 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:38.8298+00:00] No path conflicts detected 11:53:08 policy-opa-pdp | INFO[2025-06-17T11:51:38.8299+00:00] Updated the data in the corresponding path successfully 11:53:08 policy-opa-pdp | INFO[2025-06-17T11:51:38.8365+00:00] PDP received a request to get data through API 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:38.8365+00:00] datapath to get Data : /node/vehicle 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:38.8366+00:00] Json Data at /node/vehicle: {"vehicles":[{"owner":"user1","status":"available","type":"car","vehicle_id":"v1"},{"owner":"user2","status":"in use","type":"bike","vehicle_id":"v2"}]} 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:38.8461+00:00] PDP received a decision request. 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:38.8462+00:00] Headers processed for requestId: Unknown 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:38.8465+00:00] Validation successful for request fields 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:38.8465+00:00] SDK making a decision 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:38.8481+00:00] RAW opa Decision output: 11:53:08 policy-opa-pdp | { 11:53:08 policy-opa-pdp | "ID": "e13a4a4a-caf5-4c4a-b5e9-f9e36ff468a9", 11:53:08 policy-opa-pdp | "Result": { 11:53:08 policy-opa-pdp | "action_is_granted": true, 11:53:08 policy-opa-pdp | "allow": true, 11:53:08 policy-opa-pdp | "user_has_vehicle_access": [ 11:53:08 policy-opa-pdp | { 11:53:08 policy-opa-pdp | "status": "available", 11:53:08 policy-opa-pdp | "type": "car" 11:53:08 policy-opa-pdp | } 11:53:08 policy-opa-pdp | ] 11:53:08 policy-opa-pdp | }, 11:53:08 policy-opa-pdp | "Provenance": { 11:53:08 policy-opa-pdp | "version": "1.1.0", 11:53:08 policy-opa-pdp | "build_commit": "", 11:53:08 policy-opa-pdp | "build_timestamp": "", 11:53:08 policy-opa-pdp | "build_hostname": "" 11:53:08 policy-opa-pdp | } 11:53:08 policy-opa-pdp | } 11:53:08 policy-opa-pdp | {"decision_id":"e13a4a4a-caf5-4c4a-b5e9-f9e36ff468a9","input":{"actions":["use"],"attributes":["type","status"],"user":"user1","vehicle_id":"v1"},"labels":{"id":"869e472d-7165-4b5e-94a9-64cc59ee02c3","version":"1.1.0"},"level":"info","metrics":{"timer_rego_external_resolve_ns":760,"timer_rego_query_compile_ns":192063,"timer_rego_query_eval_ns":457156,"timer_rego_query_parse_ns":110161,"timer_sdk_decision_eval_ns":923973},"msg":"Decision 
Log","nd_builtin_cache":null,"path":"vehicle","result":{"action_is_granted":true,"allow":true,"user_has_vehicle_access":[{"status":"available","type":"car"}]},"time":"2025-06-17T11:51:38Z","timestamp":"2025-06-17T11:51:38.846637001Z","type":"openpolicyagent.org/decision_logs"} 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:38.8571+00:00] PDP received a decision request. 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:38.8571+00:00] Headers processed for requestId: Unknown 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:38.8574+00:00] Validation successful for request fields 11:53:08 policy-opa-pdp | WARN[2025-06-17T11:51:38.8576+00:00] Policy Name vehile does not exist 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:38.8647+00:00] PDP received a decision request. 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:38.8648+00:00] Headers processed for requestId: Unknown 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:38.8650+00:00] Validation successful for request fields 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:38.8650+00:00] SDK making a decision 11:53:08 policy-opa-pdp | {"decision_id":"2b99f701-c12a-42fd-a822-22cbb4ca060b","input":{"actions":["use"],"attributes":["type","status"],"user":"user1","vehicle_id":"v1"},"labels":{"id":"869e472d-7165-4b5e-94a9-64cc59ee02c3","version":"1.1.0"},"level":"info","metrics":{"timer_rego_external_resolve_ns":720,"timer_rego_query_eval_ns":450617,"timer_sdk_decision_eval_ns":538458},"msg":"Decision Log","nd_builtin_cache":null,"path":"vehicle","result":{"action_is_granted":true,"allow":true,"user_has_vehicle_access":[{"status":"available","type":"car"}]},"time":"2025-06-17T11:51:38Z","timestamp":"2025-06-17T11:51:38.865118245Z","type":"openpolicyagent.org/decision_logs"} 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:38.8658+00:00] RAW opa Decision output: 11:53:08 policy-opa-pdp | { 11:53:08 policy-opa-pdp | "ID": "2b99f701-c12a-42fd-a822-22cbb4ca060b", 11:53:08 policy-opa-pdp | "Result": { 11:53:08 policy-opa-pdp | "action_is_granted": true, 11:53:08 policy-opa-pdp | "allow": true, 11:53:08 policy-opa-pdp | "user_has_vehicle_access": [ 11:53:08 policy-opa-pdp | { 11:53:08 policy-opa-pdp | "status": "available", 11:53:08 policy-opa-pdp | "type": "car" 11:53:08 policy-opa-pdp | } 11:53:08 policy-opa-pdp | ] 11:53:08 policy-opa-pdp | }, 11:53:08 policy-opa-pdp | "Provenance": { 11:53:08 policy-opa-pdp | "version": "1.1.0", 11:53:08 policy-opa-pdp | "build_commit": "", 11:53:08 policy-opa-pdp | "build_timestamp": "", 11:53:08 policy-opa-pdp | "build_hostname": "" 11:53:08 policy-opa-pdp | } 11:53:08 policy-opa-pdp | } 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:39.1339+00:00] [IN|KAFKA|policy-pdp-pap] 11:53:08 policy-opa-pdp | {"source":"pap-28965174-0d2a-4482-9599-2c3383d4bf34","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"vehicle","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"3b88478e-4d54-486a-bc73-01095df1d796","timestampMs":1750161099106,"name":"opa-34bdbe81-f424-4a91-9535-1955322e40a7","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:39.1340+00:00] messageType: PDP_UPDATE 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:39.1342+00:00] PDP_UPDATE Message received: 
{"source":"pap-28965174-0d2a-4482-9599-2c3383d4bf34","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"vehicle","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"3b88478e-4d54-486a-bc73-01095df1d796","timestampMs":1750161099106,"name":"opa-34bdbe81-f424-4a91-9535-1955322e40a7","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 11:53:08 policy-opa-pdp | INFO[2025-06-17T11:51:39.1342+00:00] Found Policies to be undeployed 11:53:08 policy-opa-pdp | INFO[2025-06-17T11:51:39.1342+00:00] Extracted Policy Name: vehicle, Version: 1.0.6 for undeployment 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:39.1342+00:00] Deleting Policy from OPA : /vehicle 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:39.1367+00:00] Removing policy directory: /opt/policies/vehicle 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:39.1369+00:00] Deleting data from OPA : /node/vehicle 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:39.1369+00:00] Analyzing dataPath: /node/vehicle 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:39.1369+00:00] Path segments: [ node vehicle] 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:39.1371+00:00] Path doesn't have any parent-child hierarchy;so returning the original path: /node/vehicle 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:39.1371+00:00] Removing data directory: /opt/data/node/vehicle 11:53:08 policy-opa-pdp | INFO[2025-06-17T11:51:39.1373+00:00] PoliciesDeployed Map: { 11:53:08 policy-opa-pdp | "deployed_policies_dict": [ 11:53:08 policy-opa-pdp | { 11:53:08 policy-opa-pdp | "data": [ 11:53:08 policy-opa-pdp | "node.slice.capacity.check" 11:53:08 policy-opa-pdp | ], 11:53:08 policy-opa-pdp | "policy": [ 11:53:08 policy-opa-pdp | "slice.capacity.check" 11:53:08 policy-opa-pdp | ], 11:53:08 policy-opa-pdp | "policy-id": "slice.capacity.check", 11:53:08 policy-opa-pdp | "policy-version": "1.0.0" 11:53:08 policy-opa-pdp | } 11:53:08 policy-opa-pdp | ] 11:53:08 policy-opa-pdp | } 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:39.1373+00:00] Policies Map After Undeployment : { 11:53:08 policy-opa-pdp | "deployed_policies_dict": [ 11:53:08 policy-opa-pdp | { 11:53:08 policy-opa-pdp | "data": [ 11:53:08 policy-opa-pdp | "node.slice.capacity.check" 11:53:08 policy-opa-pdp | ], 11:53:08 policy-opa-pdp | "policy": [ 11:53:08 policy-opa-pdp | "slice.capacity.check" 11:53:08 policy-opa-pdp | ], 11:53:08 policy-opa-pdp | "policy-id": "slice.capacity.check", 11:53:08 policy-opa-pdp | "policy-version": "1.0.0" 11:53:08 policy-opa-pdp | } 11:53:08 policy-opa-pdp | ] 11:53:08 policy-opa-pdp | } 11:53:08 policy-opa-pdp | INFO[2025-06-17T11:51:39.1374+00:00] Processed policies_to_be_undeployed successfully 11:53:08 policy-opa-pdp | 2025/06/17 11:51:39 KafkaProducer or producer produce message 11:53:08 policy-opa-pdp | INFO[2025-06-17T11:51:39.1374+00:00] Sending PDP Status With Update Response 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:39.1374+00:00] [OUT|KAFKA|policy-pdp-pap] 11:53:08 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"3b88478e-4d54-486a-bc73-01095df1d796","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"vehicle\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-34bdbe81-f424-4a91-9535-1955322e40a7","requestId":"bb65420d-8fbc-45ad-a1b7-dff35a189367","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750161099137","deploymentInstanceInfo":""} 
11:53:08 policy-opa-pdp | INFO[2025-06-17T11:51:39.1375+00:00] PDP_STATUS Message Sent Successfully
11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:39.1375+00:00] 0
11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:39.1448+00:00] [IN|KAFKA|policy-pdp-pap]
11:53:08 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"3b88478e-4d54-486a-bc73-01095df1d796","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"vehicle\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-34bdbe81-f424-4a91-9535-1955322e40a7","requestId":"bb65420d-8fbc-45ad-a1b7-dff35a189367","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750161099137","deploymentInstanceInfo":""}
11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:39.1448+00:00] messageType: PDP_STATUS
11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:39.1449+00:00] discarding event of type PDP_STATUS
11:53:08 policy-opa-pdp | INFO[2025-06-17T11:51:39.4926+00:00] PDP received a request to get data through API
11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:39.4926+00:00] datapath to get Data : /node/vehicle
11:53:08 policy-opa-pdp | WARN[2025-06-17T11:51:39.4926+00:00] Error in reading data under /node/vehicle path
11:53:08 policy-opa-pdp | ERRO[2025-06-17T11:51:39.4927+00:00] Error in getting data - storage_not_found_error: /node/vehicle: document does not exist
11:53:08 policy-opa-pdp | INFO[2025-06-17T11:51:39.5029+00:00] PDP received a request to update data through API
11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:39.5032+00:00] All fields are valid!
11:53:08 policy-opa-pdp | INFO[2025-06-17T11:51:39.5032+00:00] data : [map[op:remove path:/round]]
11:53:08 policy-opa-pdp | INFO[2025-06-17T11:51:39.5032+00:00] policy name : vehicle
11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:39.5033+00:00] deployedPolicies [map[data:[node.slice.capacity.check] policy:[slice.capacity.check] policy-id:slice.capacity.check policy-version:1.0.0]]
11:53:08 policy-opa-pdp | ERRO[2025-06-17T11:51:39.5033+00:00] Policy associated with the patch request does not exist
11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:40.2152+00:00] [IN|KAFKA|policy-pdp-pap]
11:53:08 policy-opa-pdp | 
{"source":"pap-28965174-0d2a-4482-9599-2c3383d4bf34","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.abac":"ewogICAgInNlbnNvcl9kYXRhIjogWwogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDEiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiU3JpIExhbmthIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjI4IEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjYiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDAyIiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkNvbG9tYm8iLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMzAgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjEyMDAgbW0iLAogICAgICAgICAgICAid2luZHNwZWVkIjogIjYuMCBtL3MiLAogICAgICAgICAgICAiaHVtaWRpdHkiOiAiNDUlIiwKICAgICAgICAgICAgInBhcnRpY2xlX2RlbnNpdHkiOiAiMS41IGcvbCIsCiAgICAgICAgICAgICJ0aW1lc3RhbXAiOiAiMjAyNC0wMi0yNiIKICAgICAgICB9LAogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDMiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiS2FuZHkiLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMjUgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjgwMCBtbSIsCiAgICAgICAgICAgICJ3aW5kc3BlZWQiOiAiNC41IG0vcyIsCiAgICAgICAgICAgICJodW1pZGl0eSI6ICI2MCUiLAogICAgICAgICAgICAicGFydGljbGVfZGVuc2l0eSI6ICIxLjEgZy9sIiwKICAgICAgICAgICAgInRpbWVzdGFtcCI6ICIyMDI0LTAyLTI2IgogICAgICAgIH0sCiAgICAgICAgewogICAgICAgICAgICAiaWQiOiAiMDAwNCIsCiAgICAgICAgICAgICJsb2NhdGlvbiI6ICJHYWxsZSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI3LjIgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjMwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuOCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA1IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkphZmZuYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICItNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiMzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICIzLjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjIwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjAuOSBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA2IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIlRyaW5jb21hbGVlIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjIwIEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjU1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA3IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk51d2FyYSBFbGl5YSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNjAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI0LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjUwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA4IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkFudXJhZGhhcHVyYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyOCBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA5IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk1hdGFyYSIsC
iAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzMiBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiOTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI2LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjY1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfQogICAgXQp9"},"policy":{"abac":"cGFja2FnZSBhYmFjCgppbXBvcnQgcmVnby52MQoKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQoKYWxsb3cgaWYgewogdmlld2FibGVfc2Vuc29yX2RhdGEKIGFjdGlvbl9pc19yZWFkCn0KCmFjdGlvbl9pc19yZWFkIGlmICJyZWFkIiBpbiBpbnB1dC5hY3Rpb25zCgp2aWV3YWJsZV9zZW5zb3JfZGF0YSBjb250YWlucyB2aWV3X2RhdGEgaWYgewogc29tZSBzZW5zb3JfZGF0YSBpbiBkYXRhLm5vZGUuYWJhYy5zZW5zb3JfZGF0YQogc2Vuc29yX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KIHNlbnNvcl9kYXRhLnRpbWVzdGFtcCA8IGlucHV0LnRpbWVfcGVyaW9kLnRvCgogdmlld19kYXRhIDo9IHtkYXRhdHlwZTogc2Vuc29yX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"abac","version":"1.0.7","metadata":{"policy-id":"abac","policy-version":"1.0.7"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"c22c4231-ead1-4117-b32a-b0d7b235ec15","timestampMs":1750161100194,"name":"opa-34bdbe81-f424-4a91-9535-1955322e40a7","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:40.2153+00:00] messageType: PDP_UPDATE 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:40.2155+00:00] PDP_UPDATE Message received: {"source":"pap-28965174-0d2a-4482-9599-2c3383d4bf34","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.abac":"ewogICAgInNlbnNvcl9kYXRhIjogWwogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDEiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiU3JpIExhbmthIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjI4IEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjYiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDAyIiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkNvbG9tYm8iLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMzAgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjEyMDAgbW0iLAogICAgICAgICAgICAid2luZHNwZWVkIjogIjYuMCBtL3MiLAogICAgICAgICAgICAiaHVtaWRpdHkiOiAiNDUlIiwKICAgICAgICAgICAgInBhcnRpY2xlX2RlbnNpdHkiOiAiMS41IGcvbCIsCiAgICAgICAgICAgICJ0aW1lc3RhbXAiOiAiMjAyNC0wMi0yNiIKICAgICAgICB9LAogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDMiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiS2FuZHkiLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMjUgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjgwMCBtbSIsCiAgICAgICAgICAgICJ3aW5kc3BlZWQiOiAiNC41IG0vcyIsCiAgICAgICAgICAgICJodW1pZGl0eSI6ICI2MCUiLAogICAgICAgICAgICAicGFydGljbGVfZGVuc2l0eSI6ICIxLjEgZy9sIiwKICAgICAgICAgICAgInRpbWVzdGFtcCI6ICIyMDI0LTAyLTI2IgogICAgICAgIH0sCiAgICAgICAgewogICAgICAgICAgICAiaWQiOiAiMDAwNCIsCiAgICAgICAgICAgICJsb2NhdGlvbiI6ICJHYWxsZSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI3LjIgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjMwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuOCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA1IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkphZmZuYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICItNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiMzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICIzLjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjIwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjAuOSBnL2wi
LAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA2IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIlRyaW5jb21hbGVlIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjIwIEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjU1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA3IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk51d2FyYSBFbGl5YSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNjAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI0LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjUwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA4IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkFudXJhZGhhcHVyYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyOCBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA5IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk1hdGFyYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzMiBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiOTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI2LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjY1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfQogICAgXQp9"},"policy":{"abac":"cGFja2FnZSBhYmFjCgppbXBvcnQgcmVnby52MQoKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQoKYWxsb3cgaWYgewogdmlld2FibGVfc2Vuc29yX2RhdGEKIGFjdGlvbl9pc19yZWFkCn0KCmFjdGlvbl9pc19yZWFkIGlmICJyZWFkIiBpbiBpbnB1dC5hY3Rpb25zCgp2aWV3YWJsZV9zZW5zb3JfZGF0YSBjb250YWlucyB2aWV3X2RhdGEgaWYgewogc29tZSBzZW5zb3JfZGF0YSBpbiBkYXRhLm5vZGUuYWJhYy5zZW5zb3JfZGF0YQogc2Vuc29yX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KIHNlbnNvcl9kYXRhLnRpbWVzdGFtcCA8IGlucHV0LnRpbWVfcGVyaW9kLnRvCgogdmlld19kYXRhIDo9IHtkYXRhdHlwZTogc2Vuc29yX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"abac","version":"1.0.7","metadata":{"policy-id":"abac","policy-version":"1.0.7"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"c22c4231-ead1-4117-b32a-b0d7b235ec15","timestampMs":1750161100194,"name":"opa-34bdbe81-f424-4a91-9535-1955322e40a7","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:40.2155+00:00] Check if Policy is Already Deployed: { 11:53:08 policy-opa-pdp | "deployed_policies_dict": [ 11:53:08 policy-opa-pdp | { 11:53:08 policy-opa-pdp | "data": [ 11:53:08 policy-opa-pdp | "node.slice.capacity.check" 11:53:08 policy-opa-pdp | ], 11:53:08 policy-opa-pdp | "policy": [ 11:53:08 policy-opa-pdp | "slice.capacity.check" 11:53:08 policy-opa-pdp | ], 11:53:08 policy-opa-pdp | "policy-id": "slice.capacity.check", 11:53:08 policy-opa-pdp | "policy-version": "1.0.0" 11:53:08 policy-opa-pdp | } 11:53:08 policy-opa-pdp | ] 11:53:08 policy-opa-pdp | } 11:53:08 policy-opa-pdp | INFO[2025-06-17T11:51:40.2155+00:00] Policy is new and should be deployed: abac 1.0.7 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:40.2156+00:00] Policy Is Allowed: abac 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:40.2156+00:00] Validating properties data for policy: abac 11:53:08 
policy-opa-pdp | DEBU[2025-06-17T11:51:40.2156+00:00] Validating properties policy for policy: abac
11:53:08 policy-opa-pdp | INFO[2025-06-17T11:51:40.2156+00:00] Validation successful for policy: abac
11:53:08 policy-opa-pdp | INFO[2025-06-17T11:51:40.2157+00:00] Directory created: /opt/policies/abac
11:53:08 policy-opa-pdp | INFO[2025-06-17T11:51:40.2158+00:00] Policy file saved: /opt/policies/abac/policy.rego
11:53:08 policy-opa-pdp | INFO[2025-06-17T11:51:40.2158+00:00] Directory created: /opt/data/node/abac
11:53:08 policy-opa-pdp | INFO[2025-06-17T11:51:40.2159+00:00] Data file saved: /opt/data/node/abac/data.json
11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:40.2159+00:00] Before calling combinedoutput
11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:40.2345+00:00] Bundle Built Successfully....
11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:40.2387+00:00] storage not found creating : /node/abac
11:53:08 policy-opa-pdp | INFO[2025-06-17T11:51:40.2389+00:00] PoliciesDeployed Map: {
11:53:08 policy-opa-pdp | "deployed_policies_dict": [
11:53:08 policy-opa-pdp | {
11:53:08 policy-opa-pdp | "data": [
11:53:08 policy-opa-pdp | "node.slice.capacity.check"
11:53:08 policy-opa-pdp | ],
11:53:08 policy-opa-pdp | "policy": [
11:53:08 policy-opa-pdp | "slice.capacity.check"
11:53:08 policy-opa-pdp | ],
11:53:08 policy-opa-pdp | "policy-id": "slice.capacity.check",
11:53:08 policy-opa-pdp | "policy-version": "1.0.0"
11:53:08 policy-opa-pdp | },
11:53:08 policy-opa-pdp | {
11:53:08 policy-opa-pdp | "data": [
11:53:08 policy-opa-pdp | "node.abac"
11:53:08 policy-opa-pdp | ],
11:53:08 policy-opa-pdp | "policy": [
11:53:08 policy-opa-pdp | "abac"
11:53:08 policy-opa-pdp | ],
11:53:08 policy-opa-pdp | "policy-id": "abac",
11:53:08 policy-opa-pdp | "policy-version": "1.0.7"
11:53:08 policy-opa-pdp | }
11:53:08 policy-opa-pdp | ]
11:53:08 policy-opa-pdp | }
11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:40.2389+00:00] Loaded Policy: abac
11:53:08 policy-opa-pdp | INFO[2025-06-17T11:51:40.2389+00:00] Processed policies_to_be_deployed successfully
11:53:08 policy-opa-pdp | INFO[2025-06-17T11:51:40.2390+00:00] Sending PDP Status With Update Response
11:53:08 policy-opa-pdp | 2025/06/17 11:51:40 KafkaProducer or producer produce message
11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:40.2391+00:00] [OUT|KAFKA|policy-pdp-pap]
11:53:08 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"c22c4231-ead1-4117-b32a-b0d7b235ec15","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"abac\": \"1.0.7\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"abac","version":"1.0.7"}],"name":"opa-34bdbe81-f424-4a91-9535-1955322e40a7","requestId":"e2a15dbb-e3f8-48c1-858f-9747e6e552c4","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750161100239","deploymentInstanceInfo":""}
11:53:08 policy-opa-pdp | INFO[2025-06-17T11:51:40.2391+00:00] PDP_STATUS Message Sent Successfully
11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:40.2391+00:00] 0
11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:40.2470+00:00] [IN|KAFKA|policy-pdp-pap]
11:53:08 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"c22c4231-ead1-4117-b32a-b0d7b235ec15","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for 
all policies: {\n \"abac\": \"1.0.7\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"abac","version":"1.0.7"}],"name":"opa-34bdbe81-f424-4a91-9535-1955322e40a7","requestId":"e2a15dbb-e3f8-48c1-858f-9747e6e552c4","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750161100239","deploymentInstanceInfo":""} 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:40.2471+00:00] messageType: PDP_STATUS 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:51:40.2472+00:00] discarding event of type PDP_STATUS 11:53:08 policy-opa-pdp | INFO[2025-06-17T11:52:04.2920+00:00] PDP received a request to get data through API 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:52:04.2926+00:00] datapath to get Data : /node/abac 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:52:04.2927+00:00] Json Data at /node/abac: {"sensor_data":[{"humidity":"40%","id":"0001","location":"Sri Lanka","particle_density":"1.3 g/l","precipitation":"1000 mm","temperature":"28 C","timestamp":"2024-02-26","windspeed":"5.5 m/s"},{"humidity":"45%","id":"0002","location":"Colombo","particle_density":"1.5 g/l","precipitation":"1200 mm","temperature":"30 C","timestamp":"2024-02-26","windspeed":"6.0 m/s"},{"humidity":"60%","id":"0003","location":"Kandy","particle_density":"1.1 g/l","precipitation":"800 mm","temperature":"25 C","timestamp":"2024-02-26","windspeed":"4.5 m/s"},{"humidity":"30%","id":"0004","location":"Galle","particle_density":"1.8 g/l","precipitation":"500 mm","temperature":"35 C","timestamp":"2024-02-27","windspeed":"7.2 m/s"},{"humidity":"20%","id":"0005","location":"Jaffna","particle_density":"0.9 g/l","precipitation":"300 mm","temperature":"-5 C","timestamp":"2024-02-27","windspeed":"3.8 m/s"},{"humidity":"55%","id":"0006","location":"Trincomalee","particle_density":"1.2 g/l","precipitation":"1000 mm","temperature":"20 C","timestamp":"2024-02-28","windspeed":"5.0 m/s"},{"humidity":"50%","id":"0007","location":"Nuwara Eliya","particle_density":"1.3 g/l","precipitation":"600 mm","temperature":"25 C","timestamp":"2024-02-28","windspeed":"4.0 m/s"},{"humidity":"40%","id":"0008","location":"Anuradhapura","particle_density":"1.4 g/l","precipitation":"700 mm","temperature":"28 C","timestamp":"2024-02-29","windspeed":"5.8 m/s"},{"humidity":"65%","id":"0009","location":"Matara","particle_density":"1.6 g/l","precipitation":"900 mm","temperature":"32 C","timestamp":"2024-02-29","windspeed":"6.5 m/s"}]} 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:52:04.3071+00:00] PDP received a decision request. 
11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:52:04.3072+00:00] Headers processed for requestId: Unknown 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:52:04.3075+00:00] Validation successful for request fields 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:52:04.3075+00:00] SDK making a decision 11:53:08 policy-opa-pdp | {"decision_id":"e8caef0b-1e17-41dc-aa96-fed2d0520889","input":{"actions":["read"],"datatypes":["location","temperature","precipitation","windspeed"],"time_period":{"from":"2024-02-27","to":"2024-02-29"}},"labels":{"id":"869e472d-7165-4b5e-94a9-64cc59ee02c3","version":"1.1.0"},"level":"info","metrics":{"timer_rego_external_resolve_ns":730,"timer_rego_query_compile_ns":184003,"timer_rego_query_eval_ns":864542,"timer_rego_query_parse_ns":116641,"timer_sdk_decision_eval_ns":1392720},"msg":"Decision Log","nd_builtin_cache":null,"path":"abac","result":{"action_is_read":true,"allow":true,"viewable_sensor_data":[{"location":"Galle","precipitation":"500 mm","temperature":"35 C","windspeed":"7.2 m/s"},{"location":"Jaffna","precipitation":"300 mm","temperature":"-5 C","windspeed":"3.8 m/s"},{"location":"Nuwara Eliya","precipitation":"600 mm","temperature":"25 C","windspeed":"4.0 m/s"},{"location":"Trincomalee","precipitation":"1000 mm","temperature":"20 C","windspeed":"5.0 m/s"}]},"time":"2025-06-17T11:52:04Z","timestamp":"2025-06-17T11:52:04.307640643Z","type":"openpolicyagent.org/decision_logs"} 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:52:04.3096+00:00] RAW opa Decision output: 11:53:08 policy-opa-pdp | { 11:53:08 policy-opa-pdp | "ID": "e8caef0b-1e17-41dc-aa96-fed2d0520889", 11:53:08 policy-opa-pdp | "Result": { 11:53:08 policy-opa-pdp | "action_is_read": true, 11:53:08 policy-opa-pdp | "allow": true, 11:53:08 policy-opa-pdp | "viewable_sensor_data": [ 11:53:08 policy-opa-pdp | { 11:53:08 policy-opa-pdp | "location": "Galle", 11:53:08 policy-opa-pdp | "precipitation": "500 mm", 11:53:08 policy-opa-pdp | "temperature": "35 C", 11:53:08 policy-opa-pdp | "windspeed": "7.2 m/s" 11:53:08 policy-opa-pdp | }, 11:53:08 policy-opa-pdp | { 11:53:08 policy-opa-pdp | "location": "Jaffna", 11:53:08 policy-opa-pdp | "precipitation": "300 mm", 11:53:08 policy-opa-pdp | "temperature": "-5 C", 11:53:08 policy-opa-pdp | "windspeed": "3.8 m/s" 11:53:08 policy-opa-pdp | }, 11:53:08 policy-opa-pdp | { 11:53:08 policy-opa-pdp | "location": "Nuwara Eliya", 11:53:08 policy-opa-pdp | "precipitation": "600 mm", 11:53:08 policy-opa-pdp | "temperature": "25 C", 11:53:08 policy-opa-pdp | "windspeed": "4.0 m/s" 11:53:08 policy-opa-pdp | }, 11:53:08 policy-opa-pdp | { 11:53:08 policy-opa-pdp | "location": "Trincomalee", 11:53:08 policy-opa-pdp | "precipitation": "1000 mm", 11:53:08 policy-opa-pdp | "temperature": "20 C", 11:53:08 policy-opa-pdp | "windspeed": "5.0 m/s" 11:53:08 policy-opa-pdp | } 11:53:08 policy-opa-pdp | ] 11:53:08 policy-opa-pdp | }, 11:53:08 policy-opa-pdp | "Provenance": { 11:53:08 policy-opa-pdp | "version": "1.1.0", 11:53:08 policy-opa-pdp | "build_commit": "", 11:53:08 policy-opa-pdp | "build_timestamp": "", 11:53:08 policy-opa-pdp | "build_hostname": "" 11:53:08 policy-opa-pdp | } 11:53:08 policy-opa-pdp | } 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:52:04.3172+00:00] PDP received a decision request. 
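Decoded, the abac rule keeps the sensor_data readings whose timestamp falls in the half-open window [time_period.from, time_period.to) and projects each hit onto input.datatypes, which is why the 2024-02-27 and 2024-02-28 readings (Galle, Jaffna, Trincomalee, Nuwara Eliya) appear in viewable_sensor_data while the 02-26 and 02-29 ones do not; allow is true because the "read" action is also present. The same selection in plain Python, over an abbreviated copy of the document served at /node/abac (ids and unprojected fields dropped for brevity):

    records = [  # timestamp plus the four projected fields, from the /node/abac read above
        {"location": "Sri Lanka",    "temperature": "28 C", "precipitation": "1000 mm", "windspeed": "5.5 m/s", "timestamp": "2024-02-26"},
        {"location": "Colombo",      "temperature": "30 C", "precipitation": "1200 mm", "windspeed": "6.0 m/s", "timestamp": "2024-02-26"},
        {"location": "Kandy",        "temperature": "25 C", "precipitation": "800 mm",  "windspeed": "4.5 m/s", "timestamp": "2024-02-26"},
        {"location": "Galle",        "temperature": "35 C", "precipitation": "500 mm",  "windspeed": "7.2 m/s", "timestamp": "2024-02-27"},
        {"location": "Jaffna",       "temperature": "-5 C", "precipitation": "300 mm",  "windspeed": "3.8 m/s", "timestamp": "2024-02-27"},
        {"location": "Trincomalee",  "temperature": "20 C", "precipitation": "1000 mm", "windspeed": "5.0 m/s", "timestamp": "2024-02-28"},
        {"location": "Nuwara Eliya", "temperature": "25 C", "precipitation": "600 mm",  "windspeed": "4.0 m/s", "timestamp": "2024-02-28"},
        {"location": "Anuradhapura", "temperature": "28 C", "precipitation": "700 mm",  "windspeed": "5.8 m/s", "timestamp": "2024-02-29"},
        {"location": "Matara",       "temperature": "32 C", "precipitation": "900 mm",  "windspeed": "6.5 m/s", "timestamp": "2024-02-29"},
    ]
    inp = {
        "actions": ["read"],
        "datatypes": ["location", "temperature", "precipitation", "windspeed"],
        "time_period": {"from": "2024-02-27", "to": "2024-02-29"},
    }

    # ISO dates compare correctly as strings; from is inclusive, to is exclusive
    viewable = [
        {k: r[k] for k in inp["datatypes"]}
        for r in records
        if inp["time_period"]["from"] <= r["timestamp"] < inp["time_period"]["to"]
    ]
    action_is_read = "read" in inp["actions"]
    print(action_is_read, len(viewable))  # -> True 4; OPA renders the same four as a set in its own order
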
11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:52:04.3174+00:00] Headers processed for requestId: Unknown 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:52:04.3177+00:00] Validation successful for request fields 11:53:08 policy-opa-pdp | WARN[2025-06-17T11:52:04.3178+00:00] Policy Name abc does not exist 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:52:04.3275+00:00] PDP received a decision request. 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:52:04.3276+00:00] Headers processed for requestId: Unknown 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:52:04.3277+00:00] Validation successful for request fields 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:52:04.3278+00:00] SDK making a decision 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:52:04.3285+00:00] RAW opa Decision output: 11:53:08 policy-opa-pdp | { 11:53:08 policy-opa-pdp | "ID": "51d24ef9-8c65-4b69-b275-cf1cd8fa4f40", 11:53:08 policy-opa-pdp | "Result": { 11:53:08 policy-opa-pdp | "action_is_read": true, 11:53:08 policy-opa-pdp | "allow": true, 11:53:08 policy-opa-pdp | "viewable_sensor_data": [ 11:53:08 policy-opa-pdp | { 11:53:08 policy-opa-pdp | "location": "Galle", 11:53:08 policy-opa-pdp | "precipitation": "500 mm", 11:53:08 policy-opa-pdp | "temperature": "35 C", 11:53:08 policy-opa-pdp | "windspeed": "7.2 m/s" 11:53:08 policy-opa-pdp | }, 11:53:08 policy-opa-pdp | { 11:53:08 policy-opa-pdp | "location": "Jaffna", 11:53:08 policy-opa-pdp | "precipitation": "300 mm", 11:53:08 policy-opa-pdp | "temperature": "-5 C", 11:53:08 policy-opa-pdp | "windspeed": "3.8 m/s" 11:53:08 policy-opa-pdp | }, 11:53:08 policy-opa-pdp | { 11:53:08 policy-opa-pdp | "location": "Nuwara Eliya", 11:53:08 policy-opa-pdp | "precipitation": "600 mm", 11:53:08 policy-opa-pdp | "temperature": "25 C", 11:53:08 policy-opa-pdp | "windspeed": "4.0 m/s" 11:53:08 policy-opa-pdp | }, 11:53:08 policy-opa-pdp | { 11:53:08 policy-opa-pdp | "location": "Trincomalee", 11:53:08 policy-opa-pdp | "precipitation": "1000 mm", 11:53:08 policy-opa-pdp | "temperature": "20 C", 11:53:08 policy-opa-pdp | "windspeed": "5.0 m/s" 11:53:08 policy-opa-pdp | } 11:53:08 policy-opa-pdp | ] 11:53:08 policy-opa-pdp | }, 11:53:08 policy-opa-pdp | "Provenance": { 11:53:08 policy-opa-pdp | "version": "1.1.0", 11:53:08 policy-opa-pdp | "build_commit": "", 11:53:08 policy-opa-pdp | "build_timestamp": "", 11:53:08 policy-opa-pdp | "build_hostname": "" 11:53:08 policy-opa-pdp | } 11:53:08 policy-opa-pdp | } 11:53:08 policy-opa-pdp | {"decision_id":"51d24ef9-8c65-4b69-b275-cf1cd8fa4f40","input":{"actions":["read"],"datatypes":["location","temperature","precipitation","windspeed"],"time_period":{"from":"2024-02-27","to":"2024-02-29"}},"labels":{"id":"869e472d-7165-4b5e-94a9-64cc59ee02c3","version":"1.1.0"},"level":"info","metrics":{"timer_rego_external_resolve_ns":500,"timer_rego_query_eval_ns":419445,"timer_sdk_decision_eval_ns":489276},"msg":"Decision Log","nd_builtin_cache":null,"path":"abac","result":{"action_is_read":true,"allow":true,"viewable_sensor_data":[{"location":"Galle","precipitation":"500 mm","temperature":"35 C","windspeed":"7.2 m/s"},{"location":"Jaffna","precipitation":"300 mm","temperature":"-5 C","windspeed":"3.8 m/s"},{"location":"Nuwara Eliya","precipitation":"600 mm","temperature":"25 C","windspeed":"4.0 m/s"},{"location":"Trincomalee","precipitation":"1000 mm","temperature":"20 C","windspeed":"5.0 m/s"}]},"time":"2025-06-17T11:52:04Z","timestamp":"2025-06-17T11:52:04.327869737Z","type":"openpolicyagent.org/decision_logs"} 11:53:08 policy-opa-pdp | 
DEBU[2025-06-17T11:52:04.8844+00:00] [IN|KAFKA|policy-pdp-pap] 11:53:08 policy-opa-pdp | {"source":"pap-28965174-0d2a-4482-9599-2c3383d4bf34","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"abac","version":"1.0.7"}],"messageName":"PDP_UPDATE","requestId":"09ce30f1-e49a-4079-9fe9-9622bda9e261","timestampMs":1750161124868,"name":"opa-34bdbe81-f424-4a91-9535-1955322e40a7","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:52:04.8845+00:00] messageType: PDP_UPDATE 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:52:04.8846+00:00] PDP_UPDATE Message received: {"source":"pap-28965174-0d2a-4482-9599-2c3383d4bf34","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"abac","version":"1.0.7"}],"messageName":"PDP_UPDATE","requestId":"09ce30f1-e49a-4079-9fe9-9622bda9e261","timestampMs":1750161124868,"name":"opa-34bdbe81-f424-4a91-9535-1955322e40a7","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 11:53:08 policy-opa-pdp | INFO[2025-06-17T11:52:04.8846+00:00] Found Policies to be undeployed 11:53:08 policy-opa-pdp | INFO[2025-06-17T11:52:04.8846+00:00] Extracted Policy Name: abac, Version: 1.0.7 for undeployment 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:52:04.8846+00:00] Deleting Policy from OPA : /abac 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:52:04.8862+00:00] Removing policy directory: /opt/policies/abac 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:52:04.8864+00:00] Deleting data from OPA : /node/abac 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:52:04.8864+00:00] Analyzing dataPath: /node/abac 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:52:04.8864+00:00] Path segments: [ node abac] 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:52:04.8864+00:00] Path doesn't have any parent-child hierarchy;so returning the original path: /node/abac 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:52:04.8864+00:00] Removing data directory: /opt/data/node/abac 11:53:08 policy-opa-pdp | INFO[2025-06-17T11:52:04.8866+00:00] PoliciesDeployed Map: { 11:53:08 policy-opa-pdp | "deployed_policies_dict": [ 11:53:08 policy-opa-pdp | { 11:53:08 policy-opa-pdp | "data": [ 11:53:08 policy-opa-pdp | "node.slice.capacity.check" 11:53:08 policy-opa-pdp | ], 11:53:08 policy-opa-pdp | "policy": [ 11:53:08 policy-opa-pdp | "slice.capacity.check" 11:53:08 policy-opa-pdp | ], 11:53:08 policy-opa-pdp | "policy-id": "slice.capacity.check", 11:53:08 policy-opa-pdp | "policy-version": "1.0.0" 11:53:08 policy-opa-pdp | } 11:53:08 policy-opa-pdp | ] 11:53:08 policy-opa-pdp | } 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:52:04.8866+00:00] Policies Map After Undeployment : { 11:53:08 policy-opa-pdp | "deployed_policies_dict": [ 11:53:08 policy-opa-pdp | { 11:53:08 policy-opa-pdp | "data": [ 11:53:08 policy-opa-pdp | "node.slice.capacity.check" 11:53:08 policy-opa-pdp | ], 11:53:08 policy-opa-pdp | "policy": [ 11:53:08 policy-opa-pdp | "slice.capacity.check" 11:53:08 policy-opa-pdp | ], 11:53:08 policy-opa-pdp | "policy-id": "slice.capacity.check", 11:53:08 policy-opa-pdp | "policy-version": "1.0.0" 11:53:08 policy-opa-pdp | } 11:53:08 policy-opa-pdp | ] 11:53:08 policy-opa-pdp | } 11:53:08 policy-opa-pdp | INFO[2025-06-17T11:52:04.8866+00:00] Processed policies_to_be_undeployed successfully 11:53:08 policy-opa-pdp | INFO[2025-06-17T11:52:04.8866+00:00] Sending PDP Status With Update Response 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:52:04.8867+00:00] [OUT|KAFKA|policy-pdp-pap] 11:53:08 policy-opa-pdp | 
{"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"09ce30f1-e49a-4079-9fe9-9622bda9e261","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"abac\": \"1.0.7\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-34bdbe81-f424-4a91-9535-1955322e40a7","requestId":"c0d5b3a4-f81f-4a70-8251-27b7727a4894","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750161124886","deploymentInstanceInfo":""} 11:53:08 policy-opa-pdp | 2025/06/17 11:52:04 KafkaProducer or producer produce message 11:53:08 policy-opa-pdp | INFO[2025-06-17T11:52:04.8868+00:00] PDP_STATUS Message Sent Successfully 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:52:04.8868+00:00] 0 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:52:04.8944+00:00] [IN|KAFKA|policy-pdp-pap] 11:53:08 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"09ce30f1-e49a-4079-9fe9-9622bda9e261","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"abac\": \"1.0.7\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-34bdbe81-f424-4a91-9535-1955322e40a7","requestId":"c0d5b3a4-f81f-4a70-8251-27b7727a4894","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750161124886","deploymentInstanceInfo":""} 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:52:04.8944+00:00] messageType: PDP_STATUS 11:53:08 policy-opa-pdp | DEBU[2025-06-17T11:52:04.8944+00:00] discarding event of type PDP_STATUS 11:53:08 policy-pap | Waiting for api port 6969... 11:53:08 policy-pap | api (172.17.0.8:6969) open 11:53:08 policy-pap | Waiting for kafka port 9092... 11:53:08 policy-pap | kafka (172.17.0.5:9092) open 11:53:08 policy-pap | Policy pap config file: /opt/app/policy/pap/etc/papParameters.yaml 11:53:08 policy-pap | PDP group configuration file: /opt/app/policy/pap/etc/mounted/groups.json 11:53:08 policy-pap | 11:53:08 policy-pap | . ____ _ __ _ _ 11:53:08 policy-pap | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ 11:53:08 policy-pap | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ 11:53:08 policy-pap | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) 11:53:08 policy-pap | ' |____| .__|_| |_|_| |_\__, | / / / / 11:53:08 policy-pap | =========|_|==============|___/=/_/_/_/ 11:53:08 policy-pap | 11:53:08 policy-pap | :: Spring Boot :: (v3.4.6) 11:53:08 policy-pap | 11:53:08 policy-pap | [2025-06-17T11:47:26.203+00:00|INFO|PolicyPapApplication|main] Starting PolicyPapApplication using Java 17.0.15 with PID 60 (/app/pap.jar started by policy in /opt/app/policy/pap/bin) 11:53:08 policy-pap | [2025-06-17T11:47:26.205+00:00|INFO|PolicyPapApplication|main] The following 1 profile is active: "default" 11:53:08 policy-pap | [2025-06-17T11:47:27.567+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. 11:53:08 policy-pap | [2025-06-17T11:47:27.658+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 79 ms. Found 7 JPA repository interfaces. 
11:53:08 policy-pap | [2025-06-17T11:47:28.647+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port 6969 (http) 11:53:08 policy-pap | [2025-06-17T11:47:28.659+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] 11:53:08 policy-pap | [2025-06-17T11:47:28.661+00:00|INFO|StandardService|main] Starting service [Tomcat] 11:53:08 policy-pap | [2025-06-17T11:47:28.661+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.41] 11:53:08 policy-pap | [2025-06-17T11:47:28.718+00:00|INFO|[/policy/pap/v1]|main] Initializing Spring embedded WebApplicationContext 11:53:08 policy-pap | [2025-06-17T11:47:28.719+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 2461 ms 11:53:08 policy-pap | [2025-06-17T11:47:29.139+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] 11:53:08 policy-pap | [2025-06-17T11:47:29.221+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.6.16.Final 11:53:08 policy-pap | [2025-06-17T11:47:29.275+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled 11:53:08 policy-pap | [2025-06-17T11:47:29.683+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer 11:53:08 policy-pap | [2025-06-17T11:47:29.730+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... 11:53:08 policy-pap | [2025-06-17T11:47:29.960+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.postgresql.jdbc.PgConnection@1d6a22dd 11:53:08 policy-pap | [2025-06-17T11:47:29.962+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. 11:53:08 policy-pap | [2025-06-17T11:47:30.054+00:00|INFO|pooling|main] HHH10001005: Database info: 11:53:08 policy-pap | Database JDBC URL [Connecting through datasource 'HikariDataSource (HikariPool-1)'] 11:53:08 policy-pap | Database driver: undefined/unknown 11:53:08 policy-pap | Database version: 16.4 11:53:08 policy-pap | Autocommit mode: undefined/unknown 11:53:08 policy-pap | Isolation level: undefined/unknown 11:53:08 policy-pap | Minimum pool size: undefined/unknown 11:53:08 policy-pap | Maximum pool size: undefined/unknown 11:53:08 policy-pap | [2025-06-17T11:47:31.944+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) 11:53:08 policy-pap | [2025-06-17T11:47:31.948+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' 11:53:08 policy-pap | [2025-06-17T11:47:33.200+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 11:53:08 policy-pap | allow.auto.create.topics = true 11:53:08 policy-pap | auto.commit.interval.ms = 5000 11:53:08 policy-pap | auto.include.jmx.reporter = true 11:53:08 policy-pap | auto.offset.reset = latest 11:53:08 policy-pap | bootstrap.servers = [kafka:9092] 11:53:08 policy-pap | check.crcs = true 11:53:08 policy-pap | client.dns.lookup = use_all_dns_ips 11:53:08 policy-pap | client.id = consumer-41725656-81de-4f00-877a-6abbfa57a523-1 11:53:08 policy-pap | client.rack = 11:53:08 policy-pap | connections.max.idle.ms = 540000 11:53:08 policy-pap | default.api.timeout.ms = 60000 11:53:08 policy-pap | enable.auto.commit = true 11:53:08 policy-pap | enable.metrics.push = true 11:53:08 policy-pap | exclude.internal.topics = true 11:53:08 policy-pap | fetch.max.bytes = 52428800 11:53:08 policy-pap | fetch.max.wait.ms = 500 11:53:08 policy-pap | 
fetch.min.bytes = 1 11:53:08 policy-pap | group.id = 41725656-81de-4f00-877a-6abbfa57a523 11:53:08 policy-pap | group.instance.id = null 11:53:08 policy-pap | group.protocol = classic 11:53:08 policy-pap | group.remote.assignor = null 11:53:08 policy-pap | heartbeat.interval.ms = 3000 11:53:08 policy-pap | interceptor.classes = [] 11:53:08 policy-pap | internal.leave.group.on.close = true 11:53:08 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 11:53:08 policy-pap | isolation.level = read_uncommitted 11:53:08 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 11:53:08 policy-pap | max.partition.fetch.bytes = 1048576 11:53:08 policy-pap | max.poll.interval.ms = 300000 11:53:08 policy-pap | max.poll.records = 500 11:53:08 policy-pap | metadata.max.age.ms = 300000 11:53:08 policy-pap | metadata.recovery.strategy = none 11:53:08 policy-pap | metric.reporters = [] 11:53:08 policy-pap | metrics.num.samples = 2 11:53:08 policy-pap | metrics.recording.level = INFO 11:53:08 policy-pap | metrics.sample.window.ms = 30000 11:53:08 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 11:53:08 policy-pap | receive.buffer.bytes = 65536 11:53:08 policy-pap | reconnect.backoff.max.ms = 1000 11:53:08 policy-pap | reconnect.backoff.ms = 50 11:53:08 policy-pap | request.timeout.ms = 30000 11:53:08 policy-pap | retry.backoff.max.ms = 1000 11:53:08 policy-pap | retry.backoff.ms = 100 11:53:08 policy-pap | sasl.client.callback.handler.class = null 11:53:08 policy-pap | sasl.jaas.config = null 11:53:08 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 11:53:08 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 11:53:08 policy-pap | sasl.kerberos.service.name = null 11:53:08 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 11:53:08 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 11:53:08 policy-pap | sasl.login.callback.handler.class = null 11:53:08 policy-pap | sasl.login.class = null 11:53:08 policy-pap | sasl.login.connect.timeout.ms = null 11:53:08 policy-pap | sasl.login.read.timeout.ms = null 11:53:08 policy-pap | sasl.login.refresh.buffer.seconds = 300 11:53:08 policy-pap | sasl.login.refresh.min.period.seconds = 60 11:53:08 policy-pap | sasl.login.refresh.window.factor = 0.8 11:53:08 policy-pap | sasl.login.refresh.window.jitter = 0.05 11:53:08 policy-pap | sasl.login.retry.backoff.max.ms = 10000 11:53:08 policy-pap | sasl.login.retry.backoff.ms = 100 11:53:08 policy-pap | sasl.mechanism = GSSAPI 11:53:08 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 11:53:08 policy-pap | sasl.oauthbearer.expected.audience = null 11:53:08 policy-pap | sasl.oauthbearer.expected.issuer = null 11:53:08 policy-pap | sasl.oauthbearer.header.urlencode = false 11:53:08 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 11:53:08 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 11:53:08 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 11:53:08 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 11:53:08 policy-pap | sasl.oauthbearer.scope.claim.name = scope 11:53:08 policy-pap | sasl.oauthbearer.sub.claim.name = sub 11:53:08 policy-pap | sasl.oauthbearer.token.endpoint.url = null 11:53:08 policy-pap | security.protocol = PLAINTEXT 11:53:08 policy-pap | security.providers = null 11:53:08 policy-pap | send.buffer.bytes = 131072 11:53:08 policy-pap | 
session.timeout.ms = 45000 11:53:08 policy-pap | socket.connection.setup.timeout.max.ms = 30000 11:53:08 policy-pap | socket.connection.setup.timeout.ms = 10000 11:53:08 policy-pap | ssl.cipher.suites = null 11:53:08 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 11:53:08 policy-pap | ssl.endpoint.identification.algorithm = https 11:53:08 policy-pap | ssl.engine.factory.class = null 11:53:08 policy-pap | ssl.key.password = null 11:53:08 policy-pap | ssl.keymanager.algorithm = SunX509 11:53:08 policy-pap | ssl.keystore.certificate.chain = null 11:53:08 policy-pap | ssl.keystore.key = null 11:53:08 policy-pap | ssl.keystore.location = null 11:53:08 policy-pap | ssl.keystore.password = null 11:53:08 policy-pap | ssl.keystore.type = JKS 11:53:08 policy-pap | ssl.protocol = TLSv1.3 11:53:08 policy-pap | ssl.provider = null 11:53:08 policy-pap | ssl.secure.random.implementation = null 11:53:08 policy-pap | ssl.trustmanager.algorithm = PKIX 11:53:08 policy-pap | ssl.truststore.certificates = null 11:53:08 policy-pap | ssl.truststore.location = null 11:53:08 policy-pap | ssl.truststore.password = null 11:53:08 policy-pap | ssl.truststore.type = JKS 11:53:08 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 11:53:08 policy-pap | 11:53:08 policy-pap | [2025-06-17T11:47:33.258+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector 11:53:08 policy-pap | [2025-06-17T11:47:33.395+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 11:53:08 policy-pap | [2025-06-17T11:47:33.395+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 11:53:08 policy-pap | [2025-06-17T11:47:33.395+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750160853394 11:53:08 policy-pap | [2025-06-17T11:47:33.397+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-41725656-81de-4f00-877a-6abbfa57a523-1, groupId=41725656-81de-4f00-877a-6abbfa57a523] Subscribed to topic(s): policy-pdp-pap 11:53:08 policy-pap | [2025-06-17T11:47:33.398+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 11:53:08 policy-pap | allow.auto.create.topics = true 11:53:08 policy-pap | auto.commit.interval.ms = 5000 11:53:08 policy-pap | auto.include.jmx.reporter = true 11:53:08 policy-pap | auto.offset.reset = latest 11:53:08 policy-pap | bootstrap.servers = [kafka:9092] 11:53:08 policy-pap | check.crcs = true 11:53:08 policy-pap | client.dns.lookup = use_all_dns_ips 11:53:08 policy-pap | client.id = consumer-policy-pap-2 11:53:08 policy-pap | client.rack = 11:53:08 policy-pap | connections.max.idle.ms = 540000 11:53:08 policy-pap | default.api.timeout.ms = 60000 11:53:08 policy-pap | enable.auto.commit = true 11:53:08 policy-pap | enable.metrics.push = true 11:53:08 policy-pap | exclude.internal.topics = true 11:53:08 policy-pap | fetch.max.bytes = 52428800 11:53:08 policy-pap | fetch.max.wait.ms = 500 11:53:08 policy-pap | fetch.min.bytes = 1 11:53:08 policy-pap | group.id = policy-pap 11:53:08 policy-pap | group.instance.id = null 11:53:08 policy-pap | group.protocol = classic 11:53:08 policy-pap | group.remote.assignor = null 11:53:08 policy-pap | heartbeat.interval.ms = 3000 11:53:08 policy-pap | interceptor.classes = [] 11:53:08 policy-pap | internal.leave.group.on.close = true 11:53:08 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 11:53:08 policy-pap | isolation.level = read_uncommitted 11:53:08 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 11:53:08 policy-pap | 
max.partition.fetch.bytes = 1048576 11:53:08 policy-pap | max.poll.interval.ms = 300000 11:53:08 policy-pap | max.poll.records = 500 11:53:08 policy-pap | metadata.max.age.ms = 300000 11:53:08 policy-pap | metadata.recovery.strategy = none 11:53:08 policy-pap | metric.reporters = [] 11:53:08 policy-pap | metrics.num.samples = 2 11:53:08 policy-pap | metrics.recording.level = INFO 11:53:08 policy-pap | metrics.sample.window.ms = 30000 11:53:08 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 11:53:08 policy-pap | receive.buffer.bytes = 65536 11:53:08 policy-pap | reconnect.backoff.max.ms = 1000 11:53:08 policy-pap | reconnect.backoff.ms = 50 11:53:08 policy-pap | request.timeout.ms = 30000 11:53:08 policy-pap | retry.backoff.max.ms = 1000 11:53:08 policy-pap | retry.backoff.ms = 100 11:53:08 policy-pap | sasl.client.callback.handler.class = null 11:53:08 policy-pap | sasl.jaas.config = null 11:53:08 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 11:53:08 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 11:53:08 policy-pap | sasl.kerberos.service.name = null 11:53:08 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 11:53:08 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 11:53:08 policy-pap | sasl.login.callback.handler.class = null 11:53:08 policy-pap | sasl.login.class = null 11:53:08 policy-pap | sasl.login.connect.timeout.ms = null 11:53:08 policy-pap | sasl.login.read.timeout.ms = null 11:53:08 policy-pap | sasl.login.refresh.buffer.seconds = 300 11:53:08 policy-pap | sasl.login.refresh.min.period.seconds = 60 11:53:08 policy-pap | sasl.login.refresh.window.factor = 0.8 11:53:08 policy-pap | sasl.login.refresh.window.jitter = 0.05 11:53:08 policy-pap | sasl.login.retry.backoff.max.ms = 10000 11:53:08 policy-pap | sasl.login.retry.backoff.ms = 100 11:53:08 policy-pap | sasl.mechanism = GSSAPI 11:53:08 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 11:53:08 policy-pap | sasl.oauthbearer.expected.audience = null 11:53:08 policy-pap | sasl.oauthbearer.expected.issuer = null 11:53:08 policy-pap | sasl.oauthbearer.header.urlencode = false 11:53:08 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 11:53:08 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 11:53:08 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 11:53:08 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 11:53:08 policy-pap | sasl.oauthbearer.scope.claim.name = scope 11:53:08 policy-pap | sasl.oauthbearer.sub.claim.name = sub 11:53:08 policy-pap | sasl.oauthbearer.token.endpoint.url = null 11:53:08 policy-pap | security.protocol = PLAINTEXT 11:53:08 policy-pap | security.providers = null 11:53:08 policy-pap | send.buffer.bytes = 131072 11:53:08 policy-pap | session.timeout.ms = 45000 11:53:08 policy-pap | socket.connection.setup.timeout.max.ms = 30000 11:53:08 policy-pap | socket.connection.setup.timeout.ms = 10000 11:53:08 policy-pap | ssl.cipher.suites = null 11:53:08 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 11:53:08 policy-pap | ssl.endpoint.identification.algorithm = https 11:53:08 policy-pap | ssl.engine.factory.class = null 11:53:08 policy-pap | ssl.key.password = null 11:53:08 policy-pap | ssl.keymanager.algorithm = SunX509 11:53:08 policy-pap | ssl.keystore.certificate.chain = null 11:53:08 policy-pap | ssl.keystore.key = null 11:53:08 policy-pap | ssl.keystore.location = null 11:53:08 
policy-pap | ssl.keystore.password = null 11:53:08 policy-pap | ssl.keystore.type = JKS 11:53:08 policy-pap | ssl.protocol = TLSv1.3 11:53:08 policy-pap | ssl.provider = null 11:53:08 policy-pap | ssl.secure.random.implementation = null 11:53:08 policy-pap | ssl.trustmanager.algorithm = PKIX 11:53:08 policy-pap | ssl.truststore.certificates = null 11:53:08 policy-pap | ssl.truststore.location = null 11:53:08 policy-pap | ssl.truststore.password = null 11:53:08 policy-pap | ssl.truststore.type = JKS 11:53:08 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 11:53:08 policy-pap | 11:53:08 policy-pap | [2025-06-17T11:47:33.398+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector 11:53:08 policy-pap | [2025-06-17T11:47:33.405+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 11:53:08 policy-pap | [2025-06-17T11:47:33.405+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 11:53:08 policy-pap | [2025-06-17T11:47:33.405+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750160853405 11:53:08 policy-pap | [2025-06-17T11:47:33.406+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-policy-pap-2, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap 11:53:08 policy-pap | [2025-06-17T11:47:33.730+00:00|INFO|PapDatabaseInitializer|main] Created initial pdpGroup in DB - PdpGroups(groups=[PdpGroup(name=opaGroup, description=null, pdpGroupState=ACTIVE, properties={}, pdpSubgroups=[PdpSubGroup(pdpType=opa, supportedPolicyTypes=[onap.policies.native.opa 1.0.0], policies=[slice.capacity.check 1.0.0], currentInstanceCount=0, desiredInstanceCount=1, properties={}, pdpInstances=null)])]) from /opt/app/policy/pap/etc/mounted/groups.json 11:53:08 policy-pap | [2025-06-17T11:47:33.845+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning 11:53:08 policy-pap | [2025-06-17T11:47:33.920+00:00|INFO|InitializeUserDetailsBeanManagerConfigurer$InitializeUserDetailsManagerConfigurer|main] Global AuthenticationManager configured with UserDetailsService bean with name inMemoryUserDetailsManager 11:53:08 policy-pap | [2025-06-17T11:47:34.123+00:00|INFO|OptionalValidatorFactoryBean|main] Failed to set up a Bean Validation provider: jakarta.validation.NoProviderFoundException: Unable to create a Configuration, because no Jakarta Validation provider could be found. Add a provider like Hibernate Validator (RI) to your classpath. 
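The PapDatabaseInitializer entry above shows PAP seeding its database from /opt/app/policy/pap/etc/mounted/groups.json with a single opaGroup holding one opa subgroup that supports onap.policies.native.opa and pre-lists slice.capacity.check. Reconstructed from that logged PdpGroups toString, the mounted file plausibly carries the structure below (written here as a Python literal to keep the examples in one language); the field names are inferred from the log output, so treat this as an approximation rather than the exact file:

    groups_json = {
        "groups": [
            {
                "name": "opaGroup",
                "pdpGroupState": "ACTIVE",
                "properties": {},
                "pdpSubgroups": [
                    {
                        "pdpType": "opa",
                        "supportedPolicyTypes": [{"name": "onap.policies.native.opa", "version": "1.0.0"}],
                        "policies": [{"name": "slice.capacity.check", "version": "1.0.0"}],
                        "desiredInstanceCount": 1,
                        "properties": {},
                    }
                ],
            }
        ]
    }
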
11:53:08 policy-pap | [2025-06-17T11:47:34.834+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoints beneath base path '' 11:53:08 policy-pap | [2025-06-17T11:47:34.976+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] 11:53:08 policy-pap | [2025-06-17T11:47:34.993+00:00|INFO|TomcatWebServer|main] Tomcat started on port 6969 (http) with context path '/policy/pap/v1' 11:53:08 policy-pap | [2025-06-17T11:47:35.014+00:00|INFO|ServiceManager|main] Policy PAP starting 11:53:08 policy-pap | [2025-06-17T11:47:35.014+00:00|INFO|ServiceManager|main] Policy PAP starting Meter Registry 11:53:08 policy-pap | [2025-06-17T11:47:35.014+00:00|INFO|ServiceManager|main] Policy PAP starting PAP parameters 11:53:08 policy-pap | [2025-06-17T11:47:35.015+00:00|INFO|ServiceManager|main] Policy PAP starting Pdp Heartbeat Listener 11:53:08 policy-pap | [2025-06-17T11:47:35.015+00:00|INFO|ServiceManager|main] Policy PAP starting Response Request ID Dispatcher 11:53:08 policy-pap | [2025-06-17T11:47:35.015+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Request ID Dispatcher 11:53:08 policy-pap | [2025-06-17T11:47:35.016+00:00|INFO|ServiceManager|main] Policy PAP starting Response Message Dispatcher 11:53:08 policy-pap | [2025-06-17T11:47:35.018+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=41725656-81de-4f00-877a-6abbfa57a523, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@408db405 11:53:08 policy-pap | [2025-06-17T11:47:35.027+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=41725656-81de-4f00-877a-6abbfa57a523, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 11:53:08 policy-pap | [2025-06-17T11:47:35.028+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 11:53:08 policy-pap | allow.auto.create.topics = true 11:53:08 policy-pap | auto.commit.interval.ms = 5000 11:53:08 policy-pap | auto.include.jmx.reporter = true 11:53:08 policy-pap | auto.offset.reset = latest 11:53:08 policy-pap | bootstrap.servers = [kafka:9092] 11:53:08 policy-pap | check.crcs = true 11:53:08 policy-pap | client.dns.lookup = use_all_dns_ips 11:53:08 policy-pap | client.id = consumer-41725656-81de-4f00-877a-6abbfa57a523-3 11:53:08 policy-pap | client.rack = 11:53:08 policy-pap | connections.max.idle.ms = 540000 11:53:08 policy-pap | default.api.timeout.ms = 60000 11:53:08 policy-pap | enable.auto.commit = true 11:53:08 policy-pap | enable.metrics.push = true 11:53:08 policy-pap | exclude.internal.topics = true 11:53:08 policy-pap | 
fetch.max.bytes = 52428800 11:53:08 policy-pap | fetch.max.wait.ms = 500 11:53:08 policy-pap | fetch.min.bytes = 1 11:53:08 policy-pap | group.id = 41725656-81de-4f00-877a-6abbfa57a523 11:53:08 policy-pap | group.instance.id = null 11:53:08 policy-pap | group.protocol = classic 11:53:08 policy-pap | group.remote.assignor = null 11:53:08 policy-pap | heartbeat.interval.ms = 3000 11:53:08 policy-pap | interceptor.classes = [] 11:53:08 policy-pap | internal.leave.group.on.close = true 11:53:08 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 11:53:08 policy-pap | isolation.level = read_uncommitted 11:53:08 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 11:53:08 policy-pap | max.partition.fetch.bytes = 1048576 11:53:08 policy-pap | max.poll.interval.ms = 300000 11:53:08 policy-pap | max.poll.records = 500 11:53:08 policy-pap | metadata.max.age.ms = 300000 11:53:08 policy-pap | metadata.recovery.strategy = none 11:53:08 policy-pap | metric.reporters = [] 11:53:08 policy-pap | metrics.num.samples = 2 11:53:08 policy-pap | metrics.recording.level = INFO 11:53:08 policy-pap | metrics.sample.window.ms = 30000 11:53:08 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 11:53:08 policy-pap | receive.buffer.bytes = 65536 11:53:08 policy-pap | reconnect.backoff.max.ms = 1000 11:53:08 policy-pap | reconnect.backoff.ms = 50 11:53:08 policy-pap | request.timeout.ms = 30000 11:53:08 policy-pap | retry.backoff.max.ms = 1000 11:53:08 policy-pap | retry.backoff.ms = 100 11:53:08 policy-pap | sasl.client.callback.handler.class = null 11:53:08 policy-pap | sasl.jaas.config = null 11:53:08 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 11:53:08 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 11:53:08 policy-pap | sasl.kerberos.service.name = null 11:53:08 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 11:53:08 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 11:53:08 policy-pap | sasl.login.callback.handler.class = null 11:53:08 policy-pap | sasl.login.class = null 11:53:08 policy-pap | sasl.login.connect.timeout.ms = null 11:53:08 policy-pap | sasl.login.read.timeout.ms = null 11:53:08 policy-pap | sasl.login.refresh.buffer.seconds = 300 11:53:08 policy-pap | sasl.login.refresh.min.period.seconds = 60 11:53:08 policy-pap | sasl.login.refresh.window.factor = 0.8 11:53:08 policy-pap | sasl.login.refresh.window.jitter = 0.05 11:53:08 policy-pap | sasl.login.retry.backoff.max.ms = 10000 11:53:08 policy-pap | sasl.login.retry.backoff.ms = 100 11:53:08 policy-pap | sasl.mechanism = GSSAPI 11:53:08 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 11:53:08 policy-pap | sasl.oauthbearer.expected.audience = null 11:53:08 policy-pap | sasl.oauthbearer.expected.issuer = null 11:53:08 policy-pap | sasl.oauthbearer.header.urlencode = false 11:53:08 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 11:53:08 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 11:53:08 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 11:53:08 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 11:53:08 policy-pap | sasl.oauthbearer.scope.claim.name = scope 11:53:08 policy-pap | sasl.oauthbearer.sub.claim.name = sub 11:53:08 policy-pap | sasl.oauthbearer.token.endpoint.url = null 11:53:08 policy-pap | security.protocol = PLAINTEXT 11:53:08 policy-pap | 
security.providers = null 11:53:08 policy-pap | send.buffer.bytes = 131072 11:53:08 policy-pap | session.timeout.ms = 45000 11:53:08 policy-pap | socket.connection.setup.timeout.max.ms = 30000 11:53:08 policy-pap | socket.connection.setup.timeout.ms = 10000 11:53:08 policy-pap | ssl.cipher.suites = null 11:53:08 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 11:53:08 policy-pap | ssl.endpoint.identification.algorithm = https 11:53:08 policy-pap | ssl.engine.factory.class = null 11:53:08 policy-pap | ssl.key.password = null 11:53:08 policy-pap | ssl.keymanager.algorithm = SunX509 11:53:08 policy-pap | ssl.keystore.certificate.chain = null 11:53:08 policy-pap | ssl.keystore.key = null 11:53:08 policy-pap | ssl.keystore.location = null 11:53:08 policy-pap | ssl.keystore.password = null 11:53:08 policy-pap | ssl.keystore.type = JKS 11:53:08 policy-pap | ssl.protocol = TLSv1.3 11:53:08 policy-pap | ssl.provider = null 11:53:08 policy-pap | ssl.secure.random.implementation = null 11:53:08 policy-pap | ssl.trustmanager.algorithm = PKIX 11:53:08 policy-pap | ssl.truststore.certificates = null 11:53:08 policy-pap | ssl.truststore.location = null 11:53:08 policy-pap | ssl.truststore.password = null 11:53:08 policy-pap | ssl.truststore.type = JKS 11:53:08 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 11:53:08 policy-pap | 11:53:08 policy-pap | [2025-06-17T11:47:35.028+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector 11:53:08 policy-pap | [2025-06-17T11:47:35.034+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 11:53:08 policy-pap | [2025-06-17T11:47:35.035+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 11:53:08 policy-pap | [2025-06-17T11:47:35.035+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750160855034 11:53:08 policy-pap | [2025-06-17T11:47:35.035+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-41725656-81de-4f00-877a-6abbfa57a523-3, groupId=41725656-81de-4f00-877a-6abbfa57a523] Subscribed to topic(s): policy-pdp-pap 11:53:08 policy-pap | [2025-06-17T11:47:35.035+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Message Dispatcher 11:53:08 policy-pap | [2025-06-17T11:47:35.035+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=742e0326-5367-40de-a875-cab119829693, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@7746ae18 11:53:08 policy-pap | [2025-06-17T11:47:35.035+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=742e0326-5367-40de-a875-cab119829693, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, 
effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 11:53:08 policy-pap | [2025-06-17T11:47:35.036+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 11:53:08 policy-pap | allow.auto.create.topics = true 11:53:08 policy-pap | auto.commit.interval.ms = 5000 11:53:08 policy-pap | auto.include.jmx.reporter = true 11:53:08 policy-pap | auto.offset.reset = latest 11:53:08 policy-pap | bootstrap.servers = [kafka:9092] 11:53:08 policy-pap | check.crcs = true 11:53:08 policy-pap | client.dns.lookup = use_all_dns_ips 11:53:08 policy-pap | client.id = consumer-policy-pap-4 11:53:08 policy-pap | client.rack = 11:53:08 policy-pap | connections.max.idle.ms = 540000 11:53:08 policy-pap | default.api.timeout.ms = 60000 11:53:08 policy-pap | enable.auto.commit = true 11:53:08 policy-pap | enable.metrics.push = true 11:53:08 policy-pap | exclude.internal.topics = true 11:53:08 policy-pap | fetch.max.bytes = 52428800 11:53:08 policy-pap | fetch.max.wait.ms = 500 11:53:08 policy-pap | fetch.min.bytes = 1 11:53:08 policy-pap | group.id = policy-pap 11:53:08 policy-pap | group.instance.id = null 11:53:08 policy-pap | group.protocol = classic 11:53:08 policy-pap | group.remote.assignor = null 11:53:08 policy-pap | heartbeat.interval.ms = 3000 11:53:08 policy-pap | interceptor.classes = [] 11:53:08 policy-pap | internal.leave.group.on.close = true 11:53:08 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 11:53:08 policy-pap | isolation.level = read_uncommitted 11:53:08 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 11:53:08 policy-pap | max.partition.fetch.bytes = 1048576 11:53:08 policy-pap | max.poll.interval.ms = 300000 11:53:08 policy-pap | max.poll.records = 500 11:53:08 policy-pap | metadata.max.age.ms = 300000 11:53:08 policy-pap | metadata.recovery.strategy = none 11:53:08 policy-pap | metric.reporters = [] 11:53:08 policy-pap | metrics.num.samples = 2 11:53:08 policy-pap | metrics.recording.level = INFO 11:53:08 policy-pap | metrics.sample.window.ms = 30000 11:53:08 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 11:53:08 policy-pap | receive.buffer.bytes = 65536 11:53:08 policy-pap | reconnect.backoff.max.ms = 1000 11:53:08 policy-pap | reconnect.backoff.ms = 50 11:53:08 policy-pap | request.timeout.ms = 30000 11:53:08 policy-pap | retry.backoff.max.ms = 1000 11:53:08 policy-pap | retry.backoff.ms = 100 11:53:08 policy-pap | sasl.client.callback.handler.class = null 11:53:08 policy-pap | sasl.jaas.config = null 11:53:08 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 11:53:08 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 11:53:08 policy-pap | sasl.kerberos.service.name = null 11:53:08 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 11:53:08 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 11:53:08 policy-pap | sasl.login.callback.handler.class = null 11:53:08 policy-pap | sasl.login.class = null 11:53:08 policy-pap | sasl.login.connect.timeout.ms = null 11:53:08 policy-pap | sasl.login.read.timeout.ms = null 11:53:08 policy-pap | sasl.login.refresh.buffer.seconds = 300 11:53:08 policy-pap | sasl.login.refresh.min.period.seconds = 60 11:53:08 policy-pap | sasl.login.refresh.window.factor = 0.8 11:53:08 policy-pap | sasl.login.refresh.window.jitter = 0.05 11:53:08 policy-pap | sasl.login.retry.backoff.max.ms = 10000 11:53:08 policy-pap | 
sasl.login.retry.backoff.ms = 100 11:53:08 policy-pap | sasl.mechanism = GSSAPI 11:53:08 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 11:53:08 policy-pap | sasl.oauthbearer.expected.audience = null 11:53:08 policy-pap | sasl.oauthbearer.expected.issuer = null 11:53:08 policy-pap | sasl.oauthbearer.header.urlencode = false 11:53:08 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 11:53:08 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 11:53:08 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 11:53:08 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 11:53:08 policy-pap | sasl.oauthbearer.scope.claim.name = scope 11:53:08 policy-pap | sasl.oauthbearer.sub.claim.name = sub 11:53:08 policy-pap | sasl.oauthbearer.token.endpoint.url = null 11:53:08 policy-pap | security.protocol = PLAINTEXT 11:53:08 policy-pap | security.providers = null 11:53:08 policy-pap | send.buffer.bytes = 131072 11:53:08 policy-pap | session.timeout.ms = 45000 11:53:08 policy-pap | socket.connection.setup.timeout.max.ms = 30000 11:53:08 policy-pap | socket.connection.setup.timeout.ms = 10000 11:53:08 policy-pap | ssl.cipher.suites = null 11:53:08 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 11:53:08 policy-pap | ssl.endpoint.identification.algorithm = https 11:53:08 policy-pap | ssl.engine.factory.class = null 11:53:08 policy-pap | ssl.key.password = null 11:53:08 policy-pap | ssl.keymanager.algorithm = SunX509 11:53:08 policy-pap | ssl.keystore.certificate.chain = null 11:53:08 policy-pap | ssl.keystore.key = null 11:53:08 policy-pap | ssl.keystore.location = null 11:53:08 policy-pap | ssl.keystore.password = null 11:53:08 policy-pap | ssl.keystore.type = JKS 11:53:08 policy-pap | ssl.protocol = TLSv1.3 11:53:08 policy-pap | ssl.provider = null 11:53:08 policy-pap | ssl.secure.random.implementation = null 11:53:08 policy-pap | ssl.trustmanager.algorithm = PKIX 11:53:08 policy-pap | ssl.truststore.certificates = null 11:53:08 policy-pap | ssl.truststore.location = null 11:53:08 policy-pap | ssl.truststore.password = null 11:53:08 policy-pap | ssl.truststore.type = JKS 11:53:08 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 11:53:08 policy-pap | 11:53:08 policy-pap | [2025-06-17T11:47:35.036+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector 11:53:08 policy-pap | [2025-06-17T11:47:35.042+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 11:53:08 policy-pap | [2025-06-17T11:47:35.042+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 11:53:08 policy-pap | [2025-06-17T11:47:35.042+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750160855042 11:53:08 policy-pap | [2025-06-17T11:47:35.042+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap 11:53:08 policy-pap | [2025-06-17T11:47:35.042+00:00|INFO|ServiceManager|main] Policy PAP starting topics 11:53:08 policy-pap | [2025-06-17T11:47:35.042+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=742e0326-5367-40de-a875-cab119829693, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-heartbeat,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, 
useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 11:53:08 policy-pap | [2025-06-17T11:47:35.042+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=41725656-81de-4f00-877a-6abbfa57a523, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 11:53:08 policy-pap | [2025-06-17T11:47:35.042+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=d70e4ee6-366b-4ebd-aeb1-19136a7533f7, alive=false, publisher=null]]: starting 11:53:08 policy-pap | [2025-06-17T11:47:35.053+00:00|INFO|ProducerConfig|main] ProducerConfig values: 11:53:08 policy-pap | acks = -1 11:53:08 policy-pap | auto.include.jmx.reporter = true 11:53:08 policy-pap | batch.size = 16384 11:53:08 policy-pap | bootstrap.servers = [kafka:9092] 11:53:08 policy-pap | buffer.memory = 33554432 11:53:08 policy-pap | client.dns.lookup = use_all_dns_ips 11:53:08 policy-pap | client.id = producer-1 11:53:08 policy-pap | compression.gzip.level = -1 11:53:08 policy-pap | compression.lz4.level = 9 11:53:08 policy-pap | compression.type = none 11:53:08 policy-pap | compression.zstd.level = 3 11:53:08 policy-pap | connections.max.idle.ms = 540000 11:53:08 policy-pap | delivery.timeout.ms = 120000 11:53:08 policy-pap | enable.idempotence = true 11:53:08 policy-pap | enable.metrics.push = true 11:53:08 policy-pap | interceptor.classes = [] 11:53:08 policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 11:53:08 policy-pap | linger.ms = 0 11:53:08 policy-pap | max.block.ms = 60000 11:53:08 policy-pap | max.in.flight.requests.per.connection = 5 11:53:08 policy-pap | max.request.size = 1048576 11:53:08 policy-pap | metadata.max.age.ms = 300000 11:53:08 policy-pap | metadata.max.idle.ms = 300000 11:53:08 policy-pap | metadata.recovery.strategy = none 11:53:08 policy-pap | metric.reporters = [] 11:53:08 policy-pap | metrics.num.samples = 2 11:53:08 policy-pap | metrics.recording.level = INFO 11:53:08 policy-pap | metrics.sample.window.ms = 30000 11:53:08 policy-pap | partitioner.adaptive.partitioning.enable = true 11:53:08 policy-pap | partitioner.availability.timeout.ms = 0 11:53:08 policy-pap | partitioner.class = null 11:53:08 policy-pap | partitioner.ignore.keys = false 11:53:08 policy-pap | receive.buffer.bytes = 32768 11:53:08 policy-pap | reconnect.backoff.max.ms = 1000 11:53:08 policy-pap | reconnect.backoff.ms = 50 11:53:08 policy-pap | request.timeout.ms = 30000 11:53:08 policy-pap | retries = 2147483647 11:53:08 policy-pap | retry.backoff.max.ms = 1000 11:53:08 policy-pap | retry.backoff.ms = 100 11:53:08 policy-pap | sasl.client.callback.handler.class = null 11:53:08 policy-pap | sasl.jaas.config = null 11:53:08 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 11:53:08 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 11:53:08 policy-pap 
| sasl.kerberos.service.name = null 11:53:08 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 11:53:08 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 11:53:08 policy-pap | sasl.login.callback.handler.class = null 11:53:08 policy-pap | sasl.login.class = null 11:53:08 policy-pap | sasl.login.connect.timeout.ms = null 11:53:08 policy-pap | sasl.login.read.timeout.ms = null 11:53:08 policy-pap | sasl.login.refresh.buffer.seconds = 300 11:53:08 policy-pap | sasl.login.refresh.min.period.seconds = 60 11:53:08 policy-pap | sasl.login.refresh.window.factor = 0.8 11:53:08 policy-pap | sasl.login.refresh.window.jitter = 0.05 11:53:08 policy-pap | sasl.login.retry.backoff.max.ms = 10000 11:53:08 policy-pap | sasl.login.retry.backoff.ms = 100 11:53:08 policy-pap | sasl.mechanism = GSSAPI 11:53:08 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 11:53:08 policy-pap | sasl.oauthbearer.expected.audience = null 11:53:08 policy-pap | sasl.oauthbearer.expected.issuer = null 11:53:08 policy-pap | sasl.oauthbearer.header.urlencode = false 11:53:08 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 11:53:08 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 11:53:08 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 11:53:08 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 11:53:08 policy-pap | sasl.oauthbearer.scope.claim.name = scope 11:53:08 policy-pap | sasl.oauthbearer.sub.claim.name = sub 11:53:08 policy-pap | sasl.oauthbearer.token.endpoint.url = null 11:53:08 policy-pap | security.protocol = PLAINTEXT 11:53:08 policy-pap | security.providers = null 11:53:08 policy-pap | send.buffer.bytes = 131072 11:53:08 policy-pap | socket.connection.setup.timeout.max.ms = 30000 11:53:08 policy-pap | socket.connection.setup.timeout.ms = 10000 11:53:08 policy-pap | ssl.cipher.suites = null 11:53:08 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 11:53:08 policy-pap | ssl.endpoint.identification.algorithm = https 11:53:08 policy-pap | ssl.engine.factory.class = null 11:53:08 policy-pap | ssl.key.password = null 11:53:08 policy-pap | ssl.keymanager.algorithm = SunX509 11:53:08 policy-pap | ssl.keystore.certificate.chain = null 11:53:08 policy-pap | ssl.keystore.key = null 11:53:08 policy-pap | ssl.keystore.location = null 11:53:08 policy-pap | ssl.keystore.password = null 11:53:08 policy-pap | ssl.keystore.type = JKS 11:53:08 policy-pap | ssl.protocol = TLSv1.3 11:53:08 policy-pap | ssl.provider = null 11:53:08 policy-pap | ssl.secure.random.implementation = null 11:53:08 policy-pap | ssl.trustmanager.algorithm = PKIX 11:53:08 policy-pap | ssl.truststore.certificates = null 11:53:08 policy-pap | ssl.truststore.location = null 11:53:08 policy-pap | ssl.truststore.password = null 11:53:08 policy-pap | ssl.truststore.type = JKS 11:53:08 policy-pap | transaction.timeout.ms = 60000 11:53:08 policy-pap | transactional.id = null 11:53:08 policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 11:53:08 policy-pap | 11:53:08 policy-pap | [2025-06-17T11:47:35.054+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector 11:53:08 policy-pap | [2025-06-17T11:47:35.066+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. 
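The ProducerConfig block above describes PAP's first sink publisher (producer-1): an idempotent producer (acks = -1, enable.idempotence = true, retries = 2147483647) with String serializers against kafka:9092, which the later [OUT|KAFKA|policy-pdp-pap] entries use to publish PDP_UPDATE and PDP_STATE_CHANGE messages. A minimal stand-alone sketch of a publisher with the same logged settings follows; the class name and the sample payload are hypothetical, and this is not the PAP implementation.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class PdpPapPublisherSketch {
    public static void main(String[] args) throws Exception {
        // Values mirror the ProducerConfig dump above.
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka:9092");
        props.put("acks", "all");              // logged as acks = -1
        props.put("enable.idempotence", "true");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Placeholder payload; PAP itself publishes full PDP_UPDATE JSON here.
            producer.send(new ProducerRecord<>("policy-pdp-pap",
                    "{\"messageName\":\"PDP_UPDATE\"}")).get();
        }
    }
}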
11:53:08 policy-pap | [2025-06-17T11:47:35.082+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 11:53:08 policy-pap | [2025-06-17T11:47:35.082+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 11:53:08 policy-pap | [2025-06-17T11:47:35.082+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750160855082 11:53:08 policy-pap | [2025-06-17T11:47:35.082+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=d70e4ee6-366b-4ebd-aeb1-19136a7533f7, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 11:53:08 policy-pap | [2025-06-17T11:47:35.082+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=79275993-78e1-498a-a8d0-7b36af4100d5, alive=false, publisher=null]]: starting 11:53:08 policy-pap | [2025-06-17T11:47:35.083+00:00|INFO|ProducerConfig|main] ProducerConfig values: 11:53:08 policy-pap | acks = -1 11:53:08 policy-pap | auto.include.jmx.reporter = true 11:53:08 policy-pap | batch.size = 16384 11:53:08 policy-pap | bootstrap.servers = [kafka:9092] 11:53:08 policy-pap | buffer.memory = 33554432 11:53:08 policy-pap | client.dns.lookup = use_all_dns_ips 11:53:08 policy-pap | client.id = producer-2 11:53:08 policy-pap | compression.gzip.level = -1 11:53:08 policy-pap | compression.lz4.level = 9 11:53:08 policy-pap | compression.type = none 11:53:08 policy-pap | compression.zstd.level = 3 11:53:08 policy-pap | connections.max.idle.ms = 540000 11:53:08 policy-pap | delivery.timeout.ms = 120000 11:53:08 policy-pap | enable.idempotence = true 11:53:08 policy-pap | enable.metrics.push = true 11:53:08 policy-pap | interceptor.classes = [] 11:53:08 policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 11:53:08 policy-pap | linger.ms = 0 11:53:08 policy-pap | max.block.ms = 60000 11:53:08 policy-pap | max.in.flight.requests.per.connection = 5 11:53:08 policy-pap | max.request.size = 1048576 11:53:08 policy-pap | metadata.max.age.ms = 300000 11:53:08 policy-pap | metadata.max.idle.ms = 300000 11:53:08 policy-pap | metadata.recovery.strategy = none 11:53:08 policy-pap | metric.reporters = [] 11:53:08 policy-pap | metrics.num.samples = 2 11:53:08 policy-pap | metrics.recording.level = INFO 11:53:08 policy-pap | metrics.sample.window.ms = 30000 11:53:08 policy-pap | partitioner.adaptive.partitioning.enable = true 11:53:08 policy-pap | partitioner.availability.timeout.ms = 0 11:53:08 policy-pap | partitioner.class = null 11:53:08 policy-pap | partitioner.ignore.keys = false 11:53:08 policy-pap | receive.buffer.bytes = 32768 11:53:08 policy-pap | reconnect.backoff.max.ms = 1000 11:53:08 policy-pap | reconnect.backoff.ms = 50 11:53:08 policy-pap | request.timeout.ms = 30000 11:53:08 policy-pap | retries = 2147483647 11:53:08 policy-pap | retry.backoff.max.ms = 1000 11:53:08 policy-pap | retry.backoff.ms = 100 11:53:08 policy-pap | sasl.client.callback.handler.class = null 11:53:08 policy-pap | sasl.jaas.config = null 11:53:08 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 11:53:08 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 11:53:08 policy-pap | sasl.kerberos.service.name = null 11:53:08 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 11:53:08 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 11:53:08 policy-pap | sasl.login.callback.handler.class = null 11:53:08 policy-pap | sasl.login.class = null 11:53:08 policy-pap | 
sasl.login.connect.timeout.ms = null 11:53:08 policy-pap | sasl.login.read.timeout.ms = null 11:53:08 policy-pap | sasl.login.refresh.buffer.seconds = 300 11:53:08 policy-pap | sasl.login.refresh.min.period.seconds = 60 11:53:08 policy-pap | sasl.login.refresh.window.factor = 0.8 11:53:08 policy-pap | sasl.login.refresh.window.jitter = 0.05 11:53:08 policy-pap | sasl.login.retry.backoff.max.ms = 10000 11:53:08 policy-pap | sasl.login.retry.backoff.ms = 100 11:53:08 policy-pap | sasl.mechanism = GSSAPI 11:53:08 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 11:53:08 policy-pap | sasl.oauthbearer.expected.audience = null 11:53:08 policy-pap | sasl.oauthbearer.expected.issuer = null 11:53:08 policy-pap | sasl.oauthbearer.header.urlencode = false 11:53:08 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 11:53:08 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 11:53:08 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 11:53:08 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 11:53:08 policy-pap | sasl.oauthbearer.scope.claim.name = scope 11:53:08 policy-pap | sasl.oauthbearer.sub.claim.name = sub 11:53:08 policy-pap | sasl.oauthbearer.token.endpoint.url = null 11:53:08 policy-pap | security.protocol = PLAINTEXT 11:53:08 policy-pap | security.providers = null 11:53:08 policy-pap | send.buffer.bytes = 131072 11:53:08 policy-pap | socket.connection.setup.timeout.max.ms = 30000 11:53:08 policy-pap | socket.connection.setup.timeout.ms = 10000 11:53:08 policy-pap | ssl.cipher.suites = null 11:53:08 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 11:53:08 policy-pap | ssl.endpoint.identification.algorithm = https 11:53:08 policy-pap | ssl.engine.factory.class = null 11:53:08 policy-pap | ssl.key.password = null 11:53:08 policy-pap | ssl.keymanager.algorithm = SunX509 11:53:08 policy-pap | ssl.keystore.certificate.chain = null 11:53:08 policy-pap | ssl.keystore.key = null 11:53:08 policy-pap | ssl.keystore.location = null 11:53:08 policy-pap | ssl.keystore.password = null 11:53:08 policy-pap | ssl.keystore.type = JKS 11:53:08 policy-pap | ssl.protocol = TLSv1.3 11:53:08 policy-pap | ssl.provider = null 11:53:08 policy-pap | ssl.secure.random.implementation = null 11:53:08 policy-pap | ssl.trustmanager.algorithm = PKIX 11:53:08 policy-pap | ssl.truststore.certificates = null 11:53:08 policy-pap | ssl.truststore.location = null 11:53:08 policy-pap | ssl.truststore.password = null 11:53:08 policy-pap | ssl.truststore.type = JKS 11:53:08 policy-pap | transaction.timeout.ms = 60000 11:53:08 policy-pap | transactional.id = null 11:53:08 policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 11:53:08 policy-pap | 11:53:08 policy-pap | [2025-06-17T11:47:35.083+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector 11:53:08 policy-pap | [2025-06-17T11:47:35.083+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-2] Instantiated an idempotent producer. 
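producer-2 is configured identically and, judging by the later policy-notification metadata warning on its network thread, backs the notification sink. Further down in this output, the PDP_UPDATE messages carry the deployed OPA policy and its data document as base64 strings inside policiesToBeDeployed; decoded, the data entry is {"threshold": 70} and the policy entry is a Rego module (package slice.capacity.check) that defaults to Permit and returns Deny when input.total_resource exceeds data.node.slice.capacity.check.threshold for sst 1 or 29. The sketch below simply base64-decodes two of those strings as a reading aid; it is not part of the PAP or OPA PDP code, and the class name is hypothetical.

import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class PdpUpdatePayloadDecodeSketch {
    public static void main(String[] args) {
        // Copied from the PDP_UPDATE payload logged below: the "data" entry and
        // the first line of the "policy" entry, re-padded for stand-alone decoding.
        String dataB64 = "ewogICAgInRocmVzaG9sZCI6IDcwCn0=";
        String policyB64Prefix = "cGFja2FnZSBzbGljZS5jYXBhY2l0eS5jaGVjaw==";

        // Prints {"threshold": 70} -- the OPA data document backing the policy.
        System.out.println(new String(Base64.getDecoder().decode(dataB64), StandardCharsets.UTF_8));
        // Prints "package slice.capacity.check" -- the opening line of the deployed Rego module.
        System.out.println(new String(Base64.getDecoder().decode(policyB64Prefix), StandardCharsets.UTF_8));
    }
}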
11:53:08 policy-pap | [2025-06-17T11:47:35.087+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 11:53:08 policy-pap | [2025-06-17T11:47:35.087+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 11:53:08 policy-pap | [2025-06-17T11:47:35.087+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750160855087 11:53:08 policy-pap | [2025-06-17T11:47:35.088+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=79275993-78e1-498a-a8d0-7b36af4100d5, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 11:53:08 policy-pap | [2025-06-17T11:47:35.088+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator 11:53:08 policy-pap | [2025-06-17T11:47:35.088+00:00|INFO|ServiceManager|main] Policy PAP starting PDP publisher 11:53:08 policy-pap | [2025-06-17T11:47:35.089+00:00|INFO|ServiceManager|main] Policy PAP starting Policy Notification publisher 11:53:08 policy-pap | [2025-06-17T11:47:35.090+00:00|INFO|ServiceManager|main] Policy PAP starting PDP update timers 11:53:08 policy-pap | [2025-06-17T11:47:35.094+00:00|INFO|ServiceManager|main] Policy PAP starting PDP state-change timers 11:53:08 policy-pap | [2025-06-17T11:47:35.094+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification lock 11:53:08 policy-pap | [2025-06-17T11:47:35.094+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification requests 11:53:08 policy-pap | [2025-06-17T11:47:35.095+00:00|INFO|ServiceManager|main] Policy PAP starting PDP expiration timer 11:53:08 policy-pap | [2025-06-17T11:47:35.095+00:00|INFO|TimerManager|Thread-10] timer manager state-change started 11:53:08 policy-pap | [2025-06-17T11:47:35.096+00:00|INFO|ServiceManager|main] Policy PAP started 11:53:08 policy-pap | [2025-06-17T11:47:35.096+00:00|INFO|PolicyPapApplication|main] Started PolicyPapApplication in 9.67 seconds (process running for 10.219) 11:53:08 policy-pap | [2025-06-17T11:47:35.097+00:00|INFO|TimerManager|Thread-9] timer manager update started 11:53:08 policy-pap | [2025-06-17T11:47:35.490+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: ZaVd10B5QzSHTyht7yX6_w 11:53:08 policy-pap | [2025-06-17T11:47:35.492+00:00|INFO|Metadata|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Cluster ID: ZaVd10B5QzSHTyht7yX6_w 11:53:08 policy-pap | [2025-06-17T11:47:35.502+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 3 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 11:53:08 policy-pap | [2025-06-17T11:47:35.502+00:00|INFO|Metadata|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Cluster ID: ZaVd10B5QzSHTyht7yX6_w 11:53:08 policy-pap | [2025-06-17T11:47:35.517+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 0 with epoch 0 11:53:08 policy-pap | [2025-06-17T11:47:35.518+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] ProducerId set to 1 with epoch 0 11:53:08 policy-pap | [2025-06-17T11:47:35.533+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-41725656-81de-4f00-877a-6abbfa57a523-3, groupId=41725656-81de-4f00-877a-6abbfa57a523] The metadata response from the cluster reported a 
recoverable issue with correlation id 3 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 11:53:08 policy-pap | [2025-06-17T11:47:35.534+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-41725656-81de-4f00-877a-6abbfa57a523-3, groupId=41725656-81de-4f00-877a-6abbfa57a523] Cluster ID: ZaVd10B5QzSHTyht7yX6_w 11:53:08 policy-pap | [2025-06-17T11:47:35.660+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 7 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 11:53:08 policy-pap | [2025-06-17T11:47:35.672+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-41725656-81de-4f00-877a-6abbfa57a523-3, groupId=41725656-81de-4f00-877a-6abbfa57a523] The metadata response from the cluster reported a recoverable issue with correlation id 7 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 11:53:08 policy-pap | [2025-06-17T11:47:35.909+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 9 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 11:53:08 policy-pap | [2025-06-17T11:47:35.927+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-41725656-81de-4f00-877a-6abbfa57a523-3, groupId=41725656-81de-4f00-877a-6abbfa57a523] The metadata response from the cluster reported a recoverable issue with correlation id 9 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 11:53:08 policy-pap | [2025-06-17T11:47:36.346+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-41725656-81de-4f00-877a-6abbfa57a523-3, groupId=41725656-81de-4f00-877a-6abbfa57a523] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 11:53:08 policy-pap | [2025-06-17T11:47:36.347+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 11:53:08 policy-pap | [2025-06-17T11:47:36.353+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group 11:53:08 policy-pap | [2025-06-17T11:47:36.354+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-41725656-81de-4f00-877a-6abbfa57a523-3, groupId=41725656-81de-4f00-877a-6abbfa57a523] (Re-)joining group 11:53:08 policy-pap | [2025-06-17T11:47:36.381+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-41725656-81de-4f00-877a-6abbfa57a523-3, groupId=41725656-81de-4f00-877a-6abbfa57a523] Request joining group due to: need to re-join with the given member-id: consumer-41725656-81de-4f00-877a-6abbfa57a523-3-9654c27e-0d41-49ed-9abe-c41c39af0c3b 11:53:08 policy-pap | [2025-06-17T11:47:36.381+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: need to re-join with the given member-id: consumer-policy-pap-4-58de9a3d-1a09-4557-9737-af093407c693 11:53:08 policy-pap | [2025-06-17T11:47:36.382+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-41725656-81de-4f00-877a-6abbfa57a523-3, groupId=41725656-81de-4f00-877a-6abbfa57a523] (Re-)joining group 11:53:08 policy-pap | 
[2025-06-17T11:47:36.382+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group 11:53:08 policy-pap | [2025-06-17T11:47:39.407+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-41725656-81de-4f00-877a-6abbfa57a523-3, groupId=41725656-81de-4f00-877a-6abbfa57a523] Successfully joined group with generation Generation{generationId=1, memberId='consumer-41725656-81de-4f00-877a-6abbfa57a523-3-9654c27e-0d41-49ed-9abe-c41c39af0c3b', protocol='range'} 11:53:08 policy-pap | [2025-06-17T11:47:39.409+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully joined group with generation Generation{generationId=1, memberId='consumer-policy-pap-4-58de9a3d-1a09-4557-9737-af093407c693', protocol='range'} 11:53:08 policy-pap | [2025-06-17T11:47:39.417+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Finished assignment for group at generation 1: {consumer-policy-pap-4-58de9a3d-1a09-4557-9737-af093407c693=Assignment(partitions=[policy-pdp-pap-0])} 11:53:08 policy-pap | [2025-06-17T11:47:39.417+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-41725656-81de-4f00-877a-6abbfa57a523-3, groupId=41725656-81de-4f00-877a-6abbfa57a523] Finished assignment for group at generation 1: {consumer-41725656-81de-4f00-877a-6abbfa57a523-3-9654c27e-0d41-49ed-9abe-c41c39af0c3b=Assignment(partitions=[policy-pdp-pap-0])} 11:53:08 policy-pap | [2025-06-17T11:47:39.482+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-41725656-81de-4f00-877a-6abbfa57a523-3, groupId=41725656-81de-4f00-877a-6abbfa57a523] Successfully synced group in generation Generation{generationId=1, memberId='consumer-41725656-81de-4f00-877a-6abbfa57a523-3-9654c27e-0d41-49ed-9abe-c41c39af0c3b', protocol='range'} 11:53:08 policy-pap | [2025-06-17T11:47:39.483+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully synced group in generation Generation{generationId=1, memberId='consumer-policy-pap-4-58de9a3d-1a09-4557-9737-af093407c693', protocol='range'} 11:53:08 policy-pap | [2025-06-17T11:47:39.483+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-41725656-81de-4f00-877a-6abbfa57a523-3, groupId=41725656-81de-4f00-877a-6abbfa57a523] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 11:53:08 policy-pap | [2025-06-17T11:47:39.484+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 11:53:08 policy-pap | [2025-06-17T11:47:39.490+00:00|INFO|ConsumerRebalanceListenerInvoker|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-41725656-81de-4f00-877a-6abbfa57a523-3, groupId=41725656-81de-4f00-877a-6abbfa57a523] Adding newly assigned partitions: policy-pdp-pap-0 11:53:08 policy-pap | [2025-06-17T11:47:39.490+00:00|INFO|ConsumerRebalanceListenerInvoker|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Adding newly assigned partitions: policy-pdp-pap-0 11:53:08 policy-pap | [2025-06-17T11:47:39.508+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer 
clientId=consumer-policy-pap-4, groupId=policy-pap] Found no committed offset for partition policy-pdp-pap-0 11:53:08 policy-pap | [2025-06-17T11:47:39.508+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-41725656-81de-4f00-877a-6abbfa57a523-3, groupId=41725656-81de-4f00-877a-6abbfa57a523] Found no committed offset for partition policy-pdp-pap-0 11:53:08 policy-pap | [2025-06-17T11:47:39.525+00:00|INFO|SubscriptionState|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 11:53:08 policy-pap | [2025-06-17T11:47:39.525+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-41725656-81de-4f00-877a-6abbfa57a523-3, groupId=41725656-81de-4f00-877a-6abbfa57a523] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 11:53:08 policy-pap | [2025-06-17T11:47:41.609+00:00|INFO|[/policy/pap/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet' 11:53:08 policy-pap | [2025-06-17T11:47:41.609+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet' 11:53:08 policy-pap | [2025-06-17T11:47:41.611+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 1 ms 11:53:08 policy-pap | [2025-06-17T11:49:30.738+00:00|INFO|OrderedServiceImpl|KAFKA-source-policy-heartbeat] ***** OrderedServiceImpl implementers: 11:53:08 policy-pap | [] 11:53:08 policy-pap | [2025-06-17T11:49:30.739+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 11:53:08 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Status Registration Message","response":null,"policies":[],"name":"opa-34bdbe81-f424-4a91-9535-1955322e40a7","requestId":"681a42b2-1b8f-43e7-a089-3d3896aa8d81","pdpGroup":"opaGroup","pdpSubgroup":null,"timestampMs":"1750160970699","deploymentInstanceInfo":""} 11:53:08 policy-pap | [2025-06-17T11:49:30.739+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 11:53:08 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Status Registration Message","response":null,"policies":[],"name":"opa-34bdbe81-f424-4a91-9535-1955322e40a7","requestId":"681a42b2-1b8f-43e7-a089-3d3896aa8d81","pdpGroup":"opaGroup","pdpSubgroup":null,"timestampMs":"1750160970699","deploymentInstanceInfo":""} 11:53:08 policy-pap | [2025-06-17T11:49:30.744+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus 11:53:08 policy-pap | [2025-06-17T11:49:31.271+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpUpdate starting 11:53:08 policy-pap | [2025-06-17T11:49:31.271+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpUpdate starting listener 11:53:08 policy-pap | [2025-06-17T11:49:31.271+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpUpdate starting timer 11:53:08 policy-pap | [2025-06-17T11:49:31.272+00:00|INFO|TimerManager|KAFKA-source-policy-heartbeat] 
update timer registered Timer [name=ee042dd7-6468-4c99-b1ba-33cca2aa7e33, expireMs=1750161001272] 11:53:08 policy-pap | [2025-06-17T11:49:31.273+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpUpdate starting enqueue 11:53:08 policy-pap | [2025-06-17T11:49:31.273+00:00|INFO|TimerManager|Thread-9] update timer waiting 29999ms Timer [name=ee042dd7-6468-4c99-b1ba-33cca2aa7e33, expireMs=1750161001272] 11:53:08 policy-pap | [2025-06-17T11:49:31.273+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpUpdate started 11:53:08 policy-pap | [2025-06-17T11:49:31.276+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 11:53:08 policy-pap | {"source":"pap-28965174-0d2a-4482-9599-2c3383d4bf34","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.slice.capacity.check":"ewogICAgInRocmVzaG9sZCI6IDcwCn0="},"policy":{"slice.capacity.check":"cGFja2FnZSBzbGljZS5jYXBhY2l0eS5jaGVjawoKIyBEZWZhdWx0IHJ1bGUgdG8gZGVueSBpZiBubyBwb2xpY3kgbWF0Y2hlcwpkZWZhdWx0IGRlY2lzaW9uIDo9IHsKCSJyZXN1bHQiOiAiUGVybWl0IiwKCSJyZWFzb24iOiAiTm8gbWF0Y2hpbmcgcnVsZXMgYXBwbGllZCIsCn0KCiMgRGVueSBydWxlIGZvciBgc3N0ID0gMWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAxCglpbnB1dC50b3RhbF9yZXNvdXJjZSA+IGRhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGQKfQoKIyBEZW55IHJ1bGUgZm9yIGBzc3QgPSAyOWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAyOQoJaW5wdXQudG90YWxfcmVzb3VyY2UgPiBkYXRhLm5vZGUuc2xpY2UuY2FwYWNpdHkuY2hlY2sudGhyZXNob2xkCn0K"}},"name":"slice.capacity.check","version":"1.0.0","metadata":{"policy-id":"slice.capacity.check","policy-version":"1.0.0"}}],"messageName":"PDP_UPDATE","requestId":"ee042dd7-6468-4c99-b1ba-33cca2aa7e33","timestampMs":1750160971249,"name":"opa-34bdbe81-f424-4a91-9535-1955322e40a7","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 11:53:08 policy-pap | [2025-06-17T11:49:31.319+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 11:53:08 policy-pap | 
{"source":"pap-28965174-0d2a-4482-9599-2c3383d4bf34","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.slice.capacity.check":"ewogICAgInRocmVzaG9sZCI6IDcwCn0="},"policy":{"slice.capacity.check":"cGFja2FnZSBzbGljZS5jYXBhY2l0eS5jaGVjawoKIyBEZWZhdWx0IHJ1bGUgdG8gZGVueSBpZiBubyBwb2xpY3kgbWF0Y2hlcwpkZWZhdWx0IGRlY2lzaW9uIDo9IHsKCSJyZXN1bHQiOiAiUGVybWl0IiwKCSJyZWFzb24iOiAiTm8gbWF0Y2hpbmcgcnVsZXMgYXBwbGllZCIsCn0KCiMgRGVueSBydWxlIGZvciBgc3N0ID0gMWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAxCglpbnB1dC50b3RhbF9yZXNvdXJjZSA+IGRhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGQKfQoKIyBEZW55IHJ1bGUgZm9yIGBzc3QgPSAyOWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAyOQoJaW5wdXQudG90YWxfcmVzb3VyY2UgPiBkYXRhLm5vZGUuc2xpY2UuY2FwYWNpdHkuY2hlY2sudGhyZXNob2xkCn0K"}},"name":"slice.capacity.check","version":"1.0.0","metadata":{"policy-id":"slice.capacity.check","policy-version":"1.0.0"}}],"messageName":"PDP_UPDATE","requestId":"ee042dd7-6468-4c99-b1ba-33cca2aa7e33","timestampMs":1750160971249,"name":"opa-34bdbe81-f424-4a91-9535-1955322e40a7","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 11:53:08 policy-pap | [2025-06-17T11:49:31.320+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 11:53:08 policy-pap | [2025-06-17T11:49:31.322+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 11:53:08 policy-pap | 
{"source":"pap-28965174-0d2a-4482-9599-2c3383d4bf34","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.slice.capacity.check":"ewogICAgInRocmVzaG9sZCI6IDcwCn0="},"policy":{"slice.capacity.check":"cGFja2FnZSBzbGljZS5jYXBhY2l0eS5jaGVjawoKIyBEZWZhdWx0IHJ1bGUgdG8gZGVueSBpZiBubyBwb2xpY3kgbWF0Y2hlcwpkZWZhdWx0IGRlY2lzaW9uIDo9IHsKCSJyZXN1bHQiOiAiUGVybWl0IiwKCSJyZWFzb24iOiAiTm8gbWF0Y2hpbmcgcnVsZXMgYXBwbGllZCIsCn0KCiMgRGVueSBydWxlIGZvciBgc3N0ID0gMWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAxCglpbnB1dC50b3RhbF9yZXNvdXJjZSA+IGRhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGQKfQoKIyBEZW55IHJ1bGUgZm9yIGBzc3QgPSAyOWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAyOQoJaW5wdXQudG90YWxfcmVzb3VyY2UgPiBkYXRhLm5vZGUuc2xpY2UuY2FwYWNpdHkuY2hlY2sudGhyZXNob2xkCn0K"}},"name":"slice.capacity.check","version":"1.0.0","metadata":{"policy-id":"slice.capacity.check","policy-version":"1.0.0"}}],"messageName":"PDP_UPDATE","requestId":"ee042dd7-6468-4c99-b1ba-33cca2aa7e33","timestampMs":1750160971249,"name":"opa-34bdbe81-f424-4a91-9535-1955322e40a7","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 11:53:08 policy-pap | [2025-06-17T11:49:31.322+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 11:53:08 policy-pap | [2025-06-17T11:49:31.351+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 11:53:08 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"ee042dd7-6468-4c99-b1ba-33cca2aa7e33","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"slice.capacity.check\": \"1.0.0\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-34bdbe81-f424-4a91-9535-1955322e40a7","requestId":"65f9852d-b34f-4cbc-b65e-a8b557f4839e","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750160971339","deploymentInstanceInfo":""} 11:53:08 policy-pap | [2025-06-17T11:49:31.354+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id ee042dd7-6468-4c99-b1ba-33cca2aa7e33 11:53:08 policy-pap | [2025-06-17T11:49:31.354+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 11:53:08 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"ee042dd7-6468-4c99-b1ba-33cca2aa7e33","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"slice.capacity.check\": \"1.0.0\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-34bdbe81-f424-4a91-9535-1955322e40a7","requestId":"65f9852d-b34f-4cbc-b65e-a8b557f4839e","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750160971339","deploymentInstanceInfo":""} 11:53:08 policy-pap | 
[2025-06-17T11:49:31.355+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpUpdate stopping 11:53:08 policy-pap | [2025-06-17T11:49:31.355+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpUpdate stopping enqueue 11:53:08 policy-pap | [2025-06-17T11:49:31.356+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpUpdate stopping timer 11:53:08 policy-pap | [2025-06-17T11:49:31.356+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=ee042dd7-6468-4c99-b1ba-33cca2aa7e33, expireMs=1750161001272] 11:53:08 policy-pap | [2025-06-17T11:49:31.356+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpUpdate stopping listener 11:53:08 policy-pap | [2025-06-17T11:49:31.356+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpUpdate stopped 11:53:08 policy-pap | [2025-06-17T11:49:31.372+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpUpdate successful 11:53:08 policy-pap | [2025-06-17T11:49:31.372+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-34bdbe81-f424-4a91-9535-1955322e40a7 start publishing next request 11:53:08 policy-pap | [2025-06-17T11:49:31.372+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpStateChange starting 11:53:08 policy-pap | [2025-06-17T11:49:31.372+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpStateChange starting listener 11:53:08 policy-pap | [2025-06-17T11:49:31.373+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpStateChange starting timer 11:53:08 policy-pap | [2025-06-17T11:49:31.373+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification] 11:53:08 policy-pap | {"deployed-policies":[{"policy-type":"onap.policies.native.opa","policy-type-version":"1.0.0","policy-id":"slice.capacity.check","policy-version":"1.0.0","success-count":1,"failure-count":0,"incomplete-count":0}],"undeployed-policies":[]} 11:53:08 policy-pap | [2025-06-17T11:49:31.373+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer registered Timer [name=1b5e2ed8-aaa0-418a-95ce-396273388e73, expireMs=1750161001373] 11:53:08 policy-pap | [2025-06-17T11:49:31.374+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpStateChange starting enqueue 11:53:08 policy-pap | [2025-06-17T11:49:31.374+00:00|INFO|TimerManager|Thread-10] state-change timer waiting 29999ms Timer [name=1b5e2ed8-aaa0-418a-95ce-396273388e73, expireMs=1750161001373] 11:53:08 policy-pap | [2025-06-17T11:49:31.374+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpStateChange started 11:53:08 policy-pap | [2025-06-17T11:49:31.376+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 11:53:08 policy-pap | {"source":"pap-28965174-0d2a-4482-9599-2c3383d4bf34","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"1b5e2ed8-aaa0-418a-95ce-396273388e73","timestampMs":1750160971250,"name":"opa-34bdbe81-f424-4a91-9535-1955322e40a7","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 11:53:08 policy-pap | [2025-06-17T11:49:31.398+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] The metadata response from the cluster reported 
a recoverable issue with correlation id 7 : {policy-notification=LEADER_NOT_AVAILABLE} 11:53:08 policy-pap | [2025-06-17T11:49:31.394+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 11:53:08 policy-pap | {"source":"pap-28965174-0d2a-4482-9599-2c3383d4bf34","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"1b5e2ed8-aaa0-418a-95ce-396273388e73","timestampMs":1750160971250,"name":"opa-34bdbe81-f424-4a91-9535-1955322e40a7","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 11:53:08 policy-pap | [2025-06-17T11:49:31.399+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_STATE_CHANGE 11:53:08 policy-pap | [2025-06-17T11:49:31.404+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 11:53:08 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message to Pdp State Change","response":{"responseTo":"1b5e2ed8-aaa0-418a-95ce-396273388e73","responseStatus":"SUCCESS","responseMessage":"PDP State Changed From PASSIVE TO Active"},"policies":[],"name":"opa-34bdbe81-f424-4a91-9535-1955322e40a7","requestId":"e30aecee-4a83-4946-bcfd-71dd57bacda4","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750160971385","deploymentInstanceInfo":""} 11:53:08 policy-pap | [2025-06-17T11:49:31.404+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 1b5e2ed8-aaa0-418a-95ce-396273388e73 11:53:08 policy-pap | [2025-06-17T11:49:31.655+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 11:53:08 policy-pap | {"source":"pap-28965174-0d2a-4482-9599-2c3383d4bf34","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"1b5e2ed8-aaa0-418a-95ce-396273388e73","timestampMs":1750160971250,"name":"opa-34bdbe81-f424-4a91-9535-1955322e40a7","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 11:53:08 policy-pap | [2025-06-17T11:49:31.655+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATE_CHANGE 11:53:08 policy-pap | [2025-06-17T11:49:31.657+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 11:53:08 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message to Pdp State Change","response":{"responseTo":"1b5e2ed8-aaa0-418a-95ce-396273388e73","responseStatus":"SUCCESS","responseMessage":"PDP State Changed From PASSIVE TO Active"},"policies":[],"name":"opa-34bdbe81-f424-4a91-9535-1955322e40a7","requestId":"e30aecee-4a83-4946-bcfd-71dd57bacda4","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750160971385","deploymentInstanceInfo":""} 11:53:08 policy-pap | [2025-06-17T11:49:31.657+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpStateChange stopping 11:53:08 policy-pap | [2025-06-17T11:49:31.658+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpStateChange stopping enqueue 11:53:08 policy-pap | [2025-06-17T11:49:31.658+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpStateChange stopping timer 11:53:08 policy-pap | [2025-06-17T11:49:31.658+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer cancelled Timer [name=1b5e2ed8-aaa0-418a-95ce-396273388e73, expireMs=1750161001373] 11:53:08 policy-pap | [2025-06-17T11:49:31.658+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] 
opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpStateChange stopping listener 11:53:08 policy-pap | [2025-06-17T11:49:31.658+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpStateChange stopped 11:53:08 policy-pap | [2025-06-17T11:49:31.658+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpStateChange successful 11:53:08 policy-pap | [2025-06-17T11:49:31.658+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-34bdbe81-f424-4a91-9535-1955322e40a7 start publishing next request 11:53:08 policy-pap | [2025-06-17T11:49:31.658+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpUpdate starting 11:53:08 policy-pap | [2025-06-17T11:49:31.658+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpUpdate starting listener 11:53:08 policy-pap | [2025-06-17T11:49:31.658+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpUpdate starting timer 11:53:08 policy-pap | [2025-06-17T11:49:31.658+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer registered Timer [name=700dc5f1-1098-43f0-9a6f-e5f8c88a75af, expireMs=1750161001658] 11:53:08 policy-pap | [2025-06-17T11:49:31.658+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpUpdate starting enqueue 11:53:08 policy-pap | [2025-06-17T11:49:31.658+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 11:53:08 policy-pap | {"source":"pap-28965174-0d2a-4482-9599-2c3383d4bf34","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"700dc5f1-1098-43f0-9a6f-e5f8c88a75af","timestampMs":1750160971649,"name":"opa-34bdbe81-f424-4a91-9535-1955322e40a7","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 11:53:08 policy-pap | [2025-06-17T11:49:31.659+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpUpdate started 11:53:08 policy-pap | [2025-06-17T11:49:31.664+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 11:53:08 policy-pap | {"source":"pap-28965174-0d2a-4482-9599-2c3383d4bf34","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"700dc5f1-1098-43f0-9a6f-e5f8c88a75af","timestampMs":1750160971649,"name":"opa-34bdbe81-f424-4a91-9535-1955322e40a7","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 11:53:08 policy-pap | [2025-06-17T11:49:31.664+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 11:53:08 policy-pap | [2025-06-17T11:49:31.665+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 11:53:08 policy-pap | {"source":"pap-28965174-0d2a-4482-9599-2c3383d4bf34","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"700dc5f1-1098-43f0-9a6f-e5f8c88a75af","timestampMs":1750160971649,"name":"opa-34bdbe81-f424-4a91-9535-1955322e40a7","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 11:53:08 policy-pap | [2025-06-17T11:49:31.665+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 11:53:08 policy-pap | [2025-06-17T11:49:31.671+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 11:53:08 policy-pap | 
{"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"700dc5f1-1098-43f0-9a6f-e5f8c88a75af","responseStatus":"SUCCESS","responseMessage":"PDP UPDATE is successfull"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-34bdbe81-f424-4a91-9535-1955322e40a7","requestId":"6daa9b58-ec14-459a-8cfa-91b02f94b2f8","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750160971662","deploymentInstanceInfo":""} 11:53:08 policy-pap | [2025-06-17T11:49:31.671+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 11:53:08 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"700dc5f1-1098-43f0-9a6f-e5f8c88a75af","responseStatus":"SUCCESS","responseMessage":"PDP UPDATE is successfull"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-34bdbe81-f424-4a91-9535-1955322e40a7","requestId":"6daa9b58-ec14-459a-8cfa-91b02f94b2f8","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750160971662","deploymentInstanceInfo":""} 11:53:08 policy-pap | [2025-06-17T11:49:31.672+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpUpdate stopping 11:53:08 policy-pap | [2025-06-17T11:49:31.672+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpUpdate stopping enqueue 11:53:08 policy-pap | [2025-06-17T11:49:31.672+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpUpdate stopping timer 11:53:08 policy-pap | [2025-06-17T11:49:31.672+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=700dc5f1-1098-43f0-9a6f-e5f8c88a75af, expireMs=1750161001658] 11:53:08 policy-pap | [2025-06-17T11:49:31.672+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpUpdate stopping listener 11:53:08 policy-pap | [2025-06-17T11:49:31.672+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpUpdate stopped 11:53:08 policy-pap | [2025-06-17T11:49:31.672+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 700dc5f1-1098-43f0-9a6f-e5f8c88a75af 11:53:08 policy-pap | [2025-06-17T11:49:31.677+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpUpdate successful 11:53:08 policy-pap | [2025-06-17T11:49:31.677+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-34bdbe81-f424-4a91-9535-1955322e40a7 has no more requests 11:53:08 policy-pap | [2025-06-17T11:49:35.095+00:00|INFO|PdpModifyRequestMap|pool-3-thread-1] check for PDP records older than 360000ms 11:53:08 policy-pap | [2025-06-17T11:50:01.273+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=ee042dd7-6468-4c99-b1ba-33cca2aa7e33, expireMs=1750161001272] 11:53:08 policy-pap | [2025-06-17T11:50:01.373+00:00|INFO|TimerManager|Thread-10] state-change timer discarded (expired) Timer [name=1b5e2ed8-aaa0-418a-95ce-396273388e73, expireMs=1750161001373] 11:53:08 policy-pap | [2025-06-17T11:50:30.714+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 11:53:08 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp 
heartbeat","response":null,"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-34bdbe81-f424-4a91-9535-1955322e40a7","requestId":"9e2d1e83-d3d3-4e75-9aa2-4f0ce539bcfb","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750161030702","deploymentInstanceInfo":""} 11:53:08 policy-pap | [2025-06-17T11:50:30.715+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus 11:53:08 policy-pap | [2025-06-17T11:50:30.717+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 11:53:08 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp heartbeat","response":null,"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-34bdbe81-f424-4a91-9535-1955322e40a7","requestId":"9e2d1e83-d3d3-4e75-9aa2-4f0ce539bcfb","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750161030702","deploymentInstanceInfo":""} 11:53:08 policy-pap | [2025-06-17T11:50:49.020+00:00|INFO|SessionData|http-nio-6969-exec-8] cache group opaGroup 11:53:08 policy-pap | [2025-06-17T11:50:49.021+00:00|INFO|PdpGroupDeployProvider|http-nio-6969-exec-8] add policy zoneB 1.0.6 to subgroup opaGroup opa count=2 11:53:08 policy-pap | [2025-06-17T11:50:49.022+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-8] Registering a deploy for policy zoneB 1.0.6 11:53:08 policy-pap | [2025-06-17T11:50:49.023+00:00|INFO|SessionData|http-nio-6969-exec-8] add update opa-34bdbe81-f424-4a91-9535-1955322e40a7 opaGroup opa policies=1 11:53:08 policy-pap | [2025-06-17T11:50:49.023+00:00|INFO|SessionData|http-nio-6969-exec-8] update cached group opaGroup 11:53:08 policy-pap | [2025-06-17T11:50:49.024+00:00|INFO|SessionData|http-nio-6969-exec-8] updating DB group opaGroup 11:53:08 policy-pap | [2025-06-17T11:50:49.041+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-8] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=opaGroup, pdpType=opa, policy=zoneB 1.0.6, action=DEPLOYMENT, timestamp=2025-06-17T11:50:49Z, user=policyadmin)] 11:53:08 policy-pap | [2025-06-17T11:50:49.068+00:00|INFO|ServiceManager|http-nio-6969-exec-8] opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpUpdate starting 11:53:08 policy-pap | [2025-06-17T11:50:49.068+00:00|INFO|ServiceManager|http-nio-6969-exec-8] opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpUpdate starting listener 11:53:08 policy-pap | [2025-06-17T11:50:49.068+00:00|INFO|ServiceManager|http-nio-6969-exec-8] opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpUpdate starting timer 11:53:08 policy-pap | [2025-06-17T11:50:49.068+00:00|INFO|TimerManager|http-nio-6969-exec-8] update timer registered Timer [name=4c397d61-9c0d-43de-a88a-03760b13f8d4, expireMs=1750161079068] 11:53:08 policy-pap | [2025-06-17T11:50:49.068+00:00|INFO|ServiceManager|http-nio-6969-exec-8] opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpUpdate starting enqueue 11:53:08 policy-pap | [2025-06-17T11:50:49.068+00:00|INFO|ServiceManager|http-nio-6969-exec-8] opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpUpdate started 11:53:08 policy-pap | [2025-06-17T11:50:49.069+00:00|INFO|TimerManager|Thread-9] update timer waiting 30000ms Timer [name=4c397d61-9c0d-43de-a88a-03760b13f8d4, expireMs=1750161079068] 11:53:08 policy-pap | [2025-06-17T11:50:49.071+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 11:53:08 policy-pap | 
{"source":"pap-28965174-0d2a-4482-9599-2c3383d4bf34","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.zoneB":"ewogICJ6b25lIjogewogICAgInpvbmVfYWNjZXNzX2xvZ3MiOiBbCiAgICAgIHsgImxvZ19pZCI6ICJsb2cxIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDA5OjAwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJncmFudGVkIiwgInVzZXIiOiAidXNlcjEiIH0sCiAgICAgIHsgImxvZ19pZCI6ICJsb2cyIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDEwOjMwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJkZW5pZWQiLCAidXNlciI6ICJ1c2VyMiIgfSwKICAgICAgeyAibG9nX2lkIjogImxvZzMiLCAidGltZXN0YW1wIjogIjIwMjQtMTEtMDFUMTE6MDA6MDBaIiwgInpvbmVfaWQiOiAiem9uZUIiLCAiYWNjZXNzIjogImdyYW50ZWQiLCAidXNlciI6ICJ1c2VyMyIgfQogICAgXQogIH0KfQ=="},"policy":{"zoneB":"cGFja2FnZSB6b25lQgogCmltcG9ydCByZWdvLnYxCiAKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQogCmFsbG93IGlmIHsKICAgIGhhc196b25lX2FjY2VzcwogICAgYWN0aW9uX2lzX2xvZ192aWV3Cn0KIAphY3Rpb25faXNfbG9nX3ZpZXcgaWYgewogICAgInZpZXciIGluIGlucHV0LmFjdGlvbnMKfQogCmhhc196b25lX2FjY2VzcyBjb250YWlucyBhY2Nlc3NfZGF0YSBpZiB7CiAgICBzb21lIHpvbmVfZGF0YSBpbiBkYXRhLm5vZGUuem9uZUIuem9uZS56b25lX2FjY2Vzc19sb2dzCiAgICB6b25lX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KICAgIHpvbmVfZGF0YS50aW1lc3RhbXAgPCBpbnB1dC50aW1lX3BlcmlvZC50bwogICAgem9uZV9kYXRhLnpvbmVfaWQgPT0gaW5wdXQuem9uZV9pZAogICAgYWNjZXNzX2RhdGEgOj0ge2RhdGF0eXBlOiB6b25lX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"zoneB","version":"1.0.6","metadata":{"policy-id":"zoneB","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"4c397d61-9c0d-43de-a88a-03760b13f8d4","timestampMs":1750161049023,"name":"opa-34bdbe81-f424-4a91-9535-1955322e40a7","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 11:53:08 policy-pap | [2025-06-17T11:50:49.080+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 11:53:08 policy-pap | 
{"source":"pap-28965174-0d2a-4482-9599-2c3383d4bf34","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.zoneB":"ewogICJ6b25lIjogewogICAgInpvbmVfYWNjZXNzX2xvZ3MiOiBbCiAgICAgIHsgImxvZ19pZCI6ICJsb2cxIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDA5OjAwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJncmFudGVkIiwgInVzZXIiOiAidXNlcjEiIH0sCiAgICAgIHsgImxvZ19pZCI6ICJsb2cyIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDEwOjMwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJkZW5pZWQiLCAidXNlciI6ICJ1c2VyMiIgfSwKICAgICAgeyAibG9nX2lkIjogImxvZzMiLCAidGltZXN0YW1wIjogIjIwMjQtMTEtMDFUMTE6MDA6MDBaIiwgInpvbmVfaWQiOiAiem9uZUIiLCAiYWNjZXNzIjogImdyYW50ZWQiLCAidXNlciI6ICJ1c2VyMyIgfQogICAgXQogIH0KfQ=="},"policy":{"zoneB":"cGFja2FnZSB6b25lQgogCmltcG9ydCByZWdvLnYxCiAKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQogCmFsbG93IGlmIHsKICAgIGhhc196b25lX2FjY2VzcwogICAgYWN0aW9uX2lzX2xvZ192aWV3Cn0KIAphY3Rpb25faXNfbG9nX3ZpZXcgaWYgewogICAgInZpZXciIGluIGlucHV0LmFjdGlvbnMKfQogCmhhc196b25lX2FjY2VzcyBjb250YWlucyBhY2Nlc3NfZGF0YSBpZiB7CiAgICBzb21lIHpvbmVfZGF0YSBpbiBkYXRhLm5vZGUuem9uZUIuem9uZS56b25lX2FjY2Vzc19sb2dzCiAgICB6b25lX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KICAgIHpvbmVfZGF0YS50aW1lc3RhbXAgPCBpbnB1dC50aW1lX3BlcmlvZC50bwogICAgem9uZV9kYXRhLnpvbmVfaWQgPT0gaW5wdXQuem9uZV9pZAogICAgYWNjZXNzX2RhdGEgOj0ge2RhdGF0eXBlOiB6b25lX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"zoneB","version":"1.0.6","metadata":{"policy-id":"zoneB","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"4c397d61-9c0d-43de-a88a-03760b13f8d4","timestampMs":1750161049023,"name":"opa-34bdbe81-f424-4a91-9535-1955322e40a7","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 11:53:08 policy-pap | [2025-06-17T11:50:49.080+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 11:53:08 policy-pap | 
{"source":"pap-28965174-0d2a-4482-9599-2c3383d4bf34","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.zoneB":"ewogICJ6b25lIjogewogICAgInpvbmVfYWNjZXNzX2xvZ3MiOiBbCiAgICAgIHsgImxvZ19pZCI6ICJsb2cxIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDA5OjAwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJncmFudGVkIiwgInVzZXIiOiAidXNlcjEiIH0sCiAgICAgIHsgImxvZ19pZCI6ICJsb2cyIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDEwOjMwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJkZW5pZWQiLCAidXNlciI6ICJ1c2VyMiIgfSwKICAgICAgeyAibG9nX2lkIjogImxvZzMiLCAidGltZXN0YW1wIjogIjIwMjQtMTEtMDFUMTE6MDA6MDBaIiwgInpvbmVfaWQiOiAiem9uZUIiLCAiYWNjZXNzIjogImdyYW50ZWQiLCAidXNlciI6ICJ1c2VyMyIgfQogICAgXQogIH0KfQ=="},"policy":{"zoneB":"cGFja2FnZSB6b25lQgogCmltcG9ydCByZWdvLnYxCiAKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQogCmFsbG93IGlmIHsKICAgIGhhc196b25lX2FjY2VzcwogICAgYWN0aW9uX2lzX2xvZ192aWV3Cn0KIAphY3Rpb25faXNfbG9nX3ZpZXcgaWYgewogICAgInZpZXciIGluIGlucHV0LmFjdGlvbnMKfQogCmhhc196b25lX2FjY2VzcyBjb250YWlucyBhY2Nlc3NfZGF0YSBpZiB7CiAgICBzb21lIHpvbmVfZGF0YSBpbiBkYXRhLm5vZGUuem9uZUIuem9uZS56b25lX2FjY2Vzc19sb2dzCiAgICB6b25lX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KICAgIHpvbmVfZGF0YS50aW1lc3RhbXAgPCBpbnB1dC50aW1lX3BlcmlvZC50bwogICAgem9uZV9kYXRhLnpvbmVfaWQgPT0gaW5wdXQuem9uZV9pZAogICAgYWNjZXNzX2RhdGEgOj0ge2RhdGF0eXBlOiB6b25lX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"zoneB","version":"1.0.6","metadata":{"policy-id":"zoneB","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"4c397d61-9c0d-43de-a88a-03760b13f8d4","timestampMs":1750161049023,"name":"opa-34bdbe81-f424-4a91-9535-1955322e40a7","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 11:53:08 policy-pap | [2025-06-17T11:50:49.080+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 11:53:08 policy-pap | [2025-06-17T11:50:49.080+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 11:53:08 policy-pap | [2025-06-17T11:50:49.114+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 11:53:08 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"4c397d61-9c0d-43de-a88a-03760b13f8d4","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"zoneB\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"zoneB","version":"1.0.6"}],"name":"opa-34bdbe81-f424-4a91-9535-1955322e40a7","requestId":"f1752940-7df0-42dd-83eb-8a6f9bffd0e0","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750161049103","deploymentInstanceInfo":""} 11:53:08 policy-pap | [2025-06-17T11:50:49.115+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpUpdate stopping 11:53:08 policy-pap | [2025-06-17T11:50:49.115+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpUpdate stopping enqueue 11:53:08 policy-pap | [2025-06-17T11:50:49.115+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpUpdate stopping timer 11:53:08 policy-pap | [2025-06-17T11:50:49.115+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=4c397d61-9c0d-43de-a88a-03760b13f8d4, expireMs=1750161079068] 11:53:08 policy-pap | 
[2025-06-17T11:50:49.115+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpUpdate stopping listener 11:53:08 policy-pap | [2025-06-17T11:50:49.115+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpUpdate stopped 11:53:08 policy-pap | [2025-06-17T11:50:49.115+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 11:53:08 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"4c397d61-9c0d-43de-a88a-03760b13f8d4","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"zoneB\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"zoneB","version":"1.0.6"}],"name":"opa-34bdbe81-f424-4a91-9535-1955322e40a7","requestId":"f1752940-7df0-42dd-83eb-8a6f9bffd0e0","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750161049103","deploymentInstanceInfo":""} 11:53:08 policy-pap | [2025-06-17T11:50:49.116+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 4c397d61-9c0d-43de-a88a-03760b13f8d4 11:53:08 policy-pap | [2025-06-17T11:50:49.124+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpUpdate successful 11:53:08 policy-pap | [2025-06-17T11:50:49.124+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-34bdbe81-f424-4a91-9535-1955322e40a7 has no more requests 11:53:08 policy-pap | [2025-06-17T11:50:49.124+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification] 11:53:08 policy-pap | {"deployed-policies":[{"policy-type":"onap.policies.native.opa","policy-type-version":"1.0.0","policy-id":"zoneB","policy-version":"1.0.6","success-count":1,"failure-count":0,"incomplete-count":0}],"undeployed-policies":[]} 11:53:08 policy-pap | [2025-06-17T11:51:13.545+00:00|INFO|SessionData|http-nio-6969-exec-9] cache group opaGroup 11:53:08 policy-pap | [2025-06-17T11:51:13.546+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-9] remove policy zoneB 1.0.6 from subgroup opaGroup opa count=1 11:53:08 policy-pap | [2025-06-17T11:51:13.546+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-9] Registering an undeploy for policy zoneB 1.0.6 11:53:08 policy-pap | [2025-06-17T11:51:13.546+00:00|INFO|SessionData|http-nio-6969-exec-9] add update opa-34bdbe81-f424-4a91-9535-1955322e40a7 opaGroup opa policies=0 11:53:08 policy-pap | [2025-06-17T11:51:13.546+00:00|INFO|SessionData|http-nio-6969-exec-9] update cached group opaGroup 11:53:08 policy-pap | [2025-06-17T11:51:13.546+00:00|INFO|SessionData|http-nio-6969-exec-9] updating DB group opaGroup 11:53:08 policy-pap | [2025-06-17T11:51:13.556+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-9] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=opaGroup, pdpType=opa, policy=zoneB 1.0.6, action=UNDEPLOYMENT, timestamp=2025-06-17T11:51:13Z, user=policyadmin)] 11:53:08 policy-pap | [2025-06-17T11:51:13.569+00:00|INFO|ServiceManager|http-nio-6969-exec-9] opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpUpdate starting 11:53:08 policy-pap | [2025-06-17T11:51:13.569+00:00|INFO|ServiceManager|http-nio-6969-exec-9] opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpUpdate starting listener 11:53:08 policy-pap | [2025-06-17T11:51:13.569+00:00|INFO|ServiceManager|http-nio-6969-exec-9] opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpUpdate starting timer 11:53:08 policy-pap | 
[2025-06-17T11:51:13.569+00:00|INFO|TimerManager|http-nio-6969-exec-9] update timer registered Timer [name=247f4b5d-dde8-441b-8b74-988c71a54520, expireMs=1750161103569] 11:53:08 policy-pap | [2025-06-17T11:51:13.569+00:00|INFO|ServiceManager|http-nio-6969-exec-9] opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpUpdate starting enqueue 11:53:08 policy-pap | [2025-06-17T11:51:13.569+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 11:53:08 policy-pap | {"source":"pap-28965174-0d2a-4482-9599-2c3383d4bf34","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"zoneB","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"247f4b5d-dde8-441b-8b74-988c71a54520","timestampMs":1750161073546,"name":"opa-34bdbe81-f424-4a91-9535-1955322e40a7","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 11:53:08 policy-pap | [2025-06-17T11:51:13.570+00:00|INFO|ServiceManager|http-nio-6969-exec-9] opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpUpdate started 11:53:08 policy-pap | [2025-06-17T11:51:13.577+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 11:53:08 policy-pap | {"source":"pap-28965174-0d2a-4482-9599-2c3383d4bf34","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"zoneB","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"247f4b5d-dde8-441b-8b74-988c71a54520","timestampMs":1750161073546,"name":"opa-34bdbe81-f424-4a91-9535-1955322e40a7","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 11:53:08 policy-pap | [2025-06-17T11:51:13.577+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 11:53:08 policy-pap | [2025-06-17T11:51:13.580+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 11:53:08 policy-pap | {"source":"pap-28965174-0d2a-4482-9599-2c3383d4bf34","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"zoneB","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"247f4b5d-dde8-441b-8b74-988c71a54520","timestampMs":1750161073546,"name":"opa-34bdbe81-f424-4a91-9535-1955322e40a7","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 11:53:08 policy-pap | [2025-06-17T11:51:13.580+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 11:53:08 policy-pap | [2025-06-17T11:51:13.587+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 11:53:08 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"247f4b5d-dde8-441b-8b74-988c71a54520","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"zoneB\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-34bdbe81-f424-4a91-9535-1955322e40a7","requestId":"aba334fb-e8e8-44f5-84a4-48ece5224c79","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750161073577","deploymentInstanceInfo":""} 11:53:08 policy-pap | [2025-06-17T11:51:13.588+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 11:53:08 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"247f4b5d-dde8-441b-8b74-988c71a54520","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"zoneB\": 
\"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-34bdbe81-f424-4a91-9535-1955322e40a7","requestId":"aba334fb-e8e8-44f5-84a4-48ece5224c79","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750161073577","deploymentInstanceInfo":""} 11:53:08 policy-pap | [2025-06-17T11:51:13.588+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 247f4b5d-dde8-441b-8b74-988c71a54520 11:53:08 policy-pap | [2025-06-17T11:51:13.588+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpUpdate stopping 11:53:08 policy-pap | [2025-06-17T11:51:13.588+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpUpdate stopping enqueue 11:53:08 policy-pap | [2025-06-17T11:51:13.588+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpUpdate stopping timer 11:53:08 policy-pap | [2025-06-17T11:51:13.588+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=247f4b5d-dde8-441b-8b74-988c71a54520, expireMs=1750161103569] 11:53:08 policy-pap | [2025-06-17T11:51:13.588+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpUpdate stopping listener 11:53:08 policy-pap | [2025-06-17T11:51:13.588+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpUpdate stopped 11:53:08 policy-pap | [2025-06-17T11:51:13.606+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpUpdate successful 11:53:08 policy-pap | [2025-06-17T11:51:13.606+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-34bdbe81-f424-4a91-9535-1955322e40a7 has no more requests 11:53:08 policy-pap | [2025-06-17T11:51:13.607+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification] 11:53:08 policy-pap | {"deployed-policies":[],"undeployed-policies":[{"policy-type":"onap.policies.native.opa","policy-type-version":"1.0.0","policy-id":"zoneB","policy-version":"1.0.6","success-count":1,"failure-count":0,"incomplete-count":0}]} 11:53:08 policy-pap | [2025-06-17T11:51:13.937+00:00|INFO|SessionData|http-nio-6969-exec-10] cache group opaGroup 11:53:08 policy-pap | [2025-06-17T11:51:13.940+00:00|WARN|PdpGroupDeleteProvider|http-nio-6969-exec-10] failed to undeploy policy: zoneB null 11:53:08 policy-pap | [2025-06-17T11:51:13.940+00:00|WARN|PdpGroupDeleteControllerV1|http-nio-6969-exec-10] undeploy policy failed 11:53:08 policy-pap | org.onap.policy.models.base.PfModelException: policy does not appear in any PDP group: zoneB null 11:53:08 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteProvider.undeployPolicy(PdpGroupDeleteProvider.java:108) 11:53:08 policy-pap | at org.onap.policy.pap.main.rest.ProviderBase.process(ProviderBase.java:161) 11:53:08 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteProvider.undeploy(PdpGroupDeleteProvider.java:92) 11:53:08 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 11:53:08 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) 11:53:08 policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 11:53:08 policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569) 11:53:08 policy-pap | at 
org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:359) 11:53:08 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:196) 11:53:08 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163) 11:53:08 policy-pap | at org.springframework.aop.aspectj.AspectJAfterThrowingAdvice.invoke(AspectJAfterThrowingAdvice.java:64) 11:53:08 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) 11:53:08 policy-pap | at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:97) 11:53:08 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) 11:53:08 policy-pap | at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:728) 11:53:08 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteProvider$$SpringCGLIB$$0.undeploy() 11:53:08 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.lambda$deletePolicy$1(PdpGroupDeleteControllerV1.java:107) 11:53:08 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.doUndeployOperation(PdpGroupDeleteControllerV1.java:160) 11:53:08 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.deletePolicy(PdpGroupDeleteControllerV1.java:106) 11:53:08 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 11:53:08 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) 11:53:08 policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 11:53:08 policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569) 11:53:08 policy-pap | at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:359) 11:53:08 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:196) 11:53:08 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163) 11:53:08 policy-pap | at org.springframework.aop.aspectj.AspectJAfterThrowingAdvice.invoke(AspectJAfterThrowingAdvice.java:64) 11:53:08 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) 11:53:08 policy-pap | at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:97) 11:53:08 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) 11:53:08 policy-pap | at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:728) 11:53:08 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1$$SpringCGLIB$$0.deletePolicy() 11:53:08 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 11:53:08 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) 11:53:08 policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 11:53:08 policy-pap | at 
java.base/java.lang.reflect.Method.invoke(Method.java:569) 11:53:08 policy-pap | at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:258) 11:53:08 policy-pap | at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:191) 11:53:08 policy-pap | at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:118) 11:53:08 policy-pap | at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:986) 11:53:08 policy-pap | at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:891) 11:53:08 policy-pap | at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87) 11:53:08 policy-pap | at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1089) 11:53:08 policy-pap | at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:979) 11:53:08 policy-pap | at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1014) 11:53:08 policy-pap | at org.springframework.web.servlet.FrameworkServlet.doDelete(FrameworkServlet.java:936) 11:53:08 policy-pap | at jakarta.servlet.http.HttpServlet.service(HttpServlet.java:659) 11:53:08 policy-pap | at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:885) 11:53:08 policy-pap | at jakarta.servlet.http.HttpServlet.service(HttpServlet.java:723) 11:53:08 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:195) 11:53:08 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 11:53:08 policy-pap | at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:51) 11:53:08 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) 11:53:08 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 11:53:08 policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:108) 11:53:08 policy-pap | at org.springframework.security.web.FilterChainProxy.lambda$doFilterInternal$3(FilterChainProxy.java:231) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$FilterObservation$SimpleFilterObservation.lambda$wrap$1(ObservationFilterChainDecorator.java:479) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$AroundFilterObservation$SimpleAroundFilterObservation.lambda$wrap$1(ObservationFilterChainDecorator.java:340) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator.lambda$wrapSecured$0(ObservationFilterChainDecorator.java:82) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:128) 11:53:08 policy-pap | at org.springframework.security.web.access.intercept.AuthorizationFilter.doFilter(AuthorizationFilter.java:101) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 11:53:08 policy-pap | at 
org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 11:53:08 policy-pap | at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:126) 11:53:08 policy-pap | at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:120) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 11:53:08 policy-pap | at org.springframework.security.web.authentication.AnonymousAuthenticationFilter.doFilter(AnonymousAuthenticationFilter.java:100) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 11:53:08 policy-pap | at org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter.doFilter(SecurityContextHolderAwareRequestFilter.java:179) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 11:53:08 policy-pap | at org.springframework.security.web.savedrequest.RequestCacheAwareFilter.doFilter(RequestCacheAwareFilter.java:63) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 11:53:08 policy-pap | at org.springframework.security.web.authentication.www.BasicAuthenticationFilter.doFilterInternal(BasicAuthenticationFilter.java:213) 11:53:08 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 11:53:08 
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 11:53:08 policy-pap | at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:107) 11:53:08 policy-pap | at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:93) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 11:53:08 policy-pap | at org.springframework.security.web.header.HeaderWriterFilter.doHeadersAfter(HeaderWriterFilter.java:90) 11:53:08 policy-pap | at org.springframework.security.web.header.HeaderWriterFilter.doFilterInternal(HeaderWriterFilter.java:75) 11:53:08 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 11:53:08 policy-pap | at org.springframework.security.web.context.SecurityContextHolderFilter.doFilter(SecurityContextHolderFilter.java:82) 11:53:08 policy-pap | at org.springframework.security.web.context.SecurityContextHolderFilter.doFilter(SecurityContextHolderFilter.java:69) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 11:53:08 policy-pap | at org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter.doFilterInternal(WebAsyncManagerIntegrationFilter.java:62) 11:53:08 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 11:53:08 policy-pap | at org.springframework.security.web.session.DisableEncodeUrlFilter.doFilterInternal(DisableEncodeUrlFilter.java:42) 11:53:08 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 11:53:08 policy-pap | at 
org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$AroundFilterObservation$SimpleAroundFilterObservation.lambda$wrap$0(ObservationFilterChainDecorator.java:323) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:224) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 11:53:08 policy-pap | at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:233) 11:53:08 policy-pap | at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:191) 11:53:08 policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:113) 11:53:08 policy-pap | at org.springframework.web.servlet.handler.HandlerMappingIntrospector.lambda$createCacheFilter$3(HandlerMappingIntrospector.java:243) 11:53:08 policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:113) 11:53:08 policy-pap | at org.springframework.web.filter.CompositeFilter.doFilter(CompositeFilter.java:74) 11:53:08 policy-pap | at org.springframework.security.config.annotation.web.configuration.WebMvcSecurityConfiguration$CompositeFilterChainProxy.doFilter(WebMvcSecurityConfiguration.java:238) 11:53:08 policy-pap | at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:362) 11:53:08 policy-pap | at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:278) 11:53:08 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) 11:53:08 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 11:53:08 policy-pap | at org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:100) 11:53:08 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 11:53:08 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) 11:53:08 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 11:53:08 policy-pap | at org.springframework.web.filter.FormContentFilter.doFilterInternal(FormContentFilter.java:93) 11:53:08 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 11:53:08 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) 11:53:08 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 11:53:08 policy-pap | at org.springframework.web.filter.ServerHttpObservationFilter.doFilterInternal(ServerHttpObservationFilter.java:114) 11:53:08 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 11:53:08 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) 11:53:08 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 11:53:08 policy-pap | at 
org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:201) 11:53:08 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 11:53:08 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) 11:53:08 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 11:53:08 policy-pap | at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:167) 11:53:08 policy-pap | at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:90) 11:53:08 policy-pap | at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:483) 11:53:08 policy-pap | at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:116) 11:53:08 policy-pap | at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:93) 11:53:08 policy-pap | at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:74) 11:53:08 policy-pap | at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:344) 11:53:08 policy-pap | at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:398) 11:53:08 policy-pap | at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:63) 11:53:08 policy-pap | at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:903) 11:53:08 policy-pap | at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1740) 11:53:08 policy-pap | at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:52) 11:53:08 policy-pap | at org.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1189) 11:53:08 policy-pap | at org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:658) 11:53:08 policy-pap | at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:63) 11:53:08 policy-pap | at java.base/java.lang.Thread.run(Thread.java:840) 11:53:08 policy-pap | [2025-06-17T11:51:14.693+00:00|INFO|SessionData|http-nio-6969-exec-1] cache group opaGroup 11:53:08 policy-pap | [2025-06-17T11:51:14.693+00:00|INFO|PdpGroupDeployProvider|http-nio-6969-exec-1] add policy vehicle 1.0.6 to subgroup opaGroup opa count=2 11:53:08 policy-pap | [2025-06-17T11:51:14.693+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-1] Registering a deploy for policy vehicle 1.0.6 11:53:08 policy-pap | [2025-06-17T11:51:14.693+00:00|INFO|SessionData|http-nio-6969-exec-1] add update opa-34bdbe81-f424-4a91-9535-1955322e40a7 opaGroup opa policies=1 11:53:08 policy-pap | [2025-06-17T11:51:14.693+00:00|INFO|SessionData|http-nio-6969-exec-1] update cached group opaGroup 11:53:08 policy-pap | [2025-06-17T11:51:14.693+00:00|INFO|SessionData|http-nio-6969-exec-1] updating DB group opaGroup 11:53:08 policy-pap | [2025-06-17T11:51:14.700+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-1] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=opaGroup, pdpType=opa, policy=vehicle 1.0.6, action=DEPLOYMENT, timestamp=2025-06-17T11:51:14Z, user=policyadmin)] 11:53:08 policy-pap | [2025-06-17T11:51:14.708+00:00|INFO|ServiceManager|http-nio-6969-exec-1] opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpUpdate starting 11:53:08 policy-pap | [2025-06-17T11:51:14.708+00:00|INFO|ServiceManager|http-nio-6969-exec-1] opa-34bdbe81-f424-4a91-9535-1955322e40a7 
PdpUpdate starting listener 11:53:08 policy-pap | [2025-06-17T11:51:14.708+00:00|INFO|ServiceManager|http-nio-6969-exec-1] opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpUpdate starting timer 11:53:08 policy-pap | [2025-06-17T11:51:14.708+00:00|INFO|TimerManager|http-nio-6969-exec-1] update timer registered Timer [name=fd848027-65b5-4ee2-97e4-134b66d98a26, expireMs=1750161104708] 11:53:08 policy-pap | [2025-06-17T11:51:14.708+00:00|INFO|ServiceManager|http-nio-6969-exec-1] opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpUpdate starting enqueue 11:53:08 policy-pap | [2025-06-17T11:51:14.708+00:00|INFO|ServiceManager|http-nio-6969-exec-1] opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpUpdate started 11:53:08 policy-pap | [2025-06-17T11:51:14.709+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 11:53:08 policy-pap | {"source":"pap-28965174-0d2a-4482-9599-2c3383d4bf34","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.vehicle":"ewogICJ2ZWhpY2xlcyI6IFsKICAgIHsgInZlaGljbGVfaWQiOiAidjEiLCAib3duZXIiOiAidXNlcjEiLCAidHlwZSI6ICJjYXIiLCAic3RhdHVzIjogImF2YWlsYWJsZSIgfSwKICAgIHsgInZlaGljbGVfaWQiOiAidjIiLCAib3duZXIiOiAidXNlcjIiLCAidHlwZSI6ICJiaWtlIiwgInN0YXR1cyI6ICJpbiB1c2UiIH0KICBdCn0K"},"policy":{"vehicle":"cGFja2FnZSB2ZWhpY2xlCgppbXBvcnQgIHJlZ28udjEKCmRlZmF1bHQgYWxsb3cgOj0gZmFsc2UKCmFsbG93IGlmIHsKICAgIHVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzCiAgICBhY3Rpb25faXNfZ3JhbnRlZAp9CgphY3Rpb25faXNfZ3JhbnRlZCBpZiB7CiAgICAidXNlIiBpbiBpbnB1dC5hY3Rpb25zCn0KCnVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzIGNvbnRhaW5zIHZlaGljbGVfZGF0YSBpZiB7CiAgICBzb21lIHZlaGljbGUgaW4gZGF0YS5ub2RlLnZlaGljbGUudmVoaWNsZXMKICAgIHZlaGljbGUudmVoaWNsZV9pZCA9PSBpbnB1dC52ZWhpY2xlX2lkCiAgICB2ZWhpY2xlLm93bmVyID09IGlucHV0LnVzZXIKICAgIHZlaGljbGVfZGF0YSA6PSB7aW5mbzogdmVoaWNsZVtpbmZvXSB8IGluZm8gaW4gaW5wdXQuYXR0cmlidXRlc30KfQo="}},"name":"vehicle","version":"1.0.6","metadata":{"policy-id":"vehicle","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"fd848027-65b5-4ee2-97e4-134b66d98a26","timestampMs":1750161074693,"name":"opa-34bdbe81-f424-4a91-9535-1955322e40a7","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 11:53:08 policy-pap | [2025-06-17T11:51:14.716+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 11:53:08 policy-pap | 
{"source":"pap-28965174-0d2a-4482-9599-2c3383d4bf34","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.vehicle":"ewogICJ2ZWhpY2xlcyI6IFsKICAgIHsgInZlaGljbGVfaWQiOiAidjEiLCAib3duZXIiOiAidXNlcjEiLCAidHlwZSI6ICJjYXIiLCAic3RhdHVzIjogImF2YWlsYWJsZSIgfSwKICAgIHsgInZlaGljbGVfaWQiOiAidjIiLCAib3duZXIiOiAidXNlcjIiLCAidHlwZSI6ICJiaWtlIiwgInN0YXR1cyI6ICJpbiB1c2UiIH0KICBdCn0K"},"policy":{"vehicle":"cGFja2FnZSB2ZWhpY2xlCgppbXBvcnQgIHJlZ28udjEKCmRlZmF1bHQgYWxsb3cgOj0gZmFsc2UKCmFsbG93IGlmIHsKICAgIHVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzCiAgICBhY3Rpb25faXNfZ3JhbnRlZAp9CgphY3Rpb25faXNfZ3JhbnRlZCBpZiB7CiAgICAidXNlIiBpbiBpbnB1dC5hY3Rpb25zCn0KCnVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzIGNvbnRhaW5zIHZlaGljbGVfZGF0YSBpZiB7CiAgICBzb21lIHZlaGljbGUgaW4gZGF0YS5ub2RlLnZlaGljbGUudmVoaWNsZXMKICAgIHZlaGljbGUudmVoaWNsZV9pZCA9PSBpbnB1dC52ZWhpY2xlX2lkCiAgICB2ZWhpY2xlLm93bmVyID09IGlucHV0LnVzZXIKICAgIHZlaGljbGVfZGF0YSA6PSB7aW5mbzogdmVoaWNsZVtpbmZvXSB8IGluZm8gaW4gaW5wdXQuYXR0cmlidXRlc30KfQo="}},"name":"vehicle","version":"1.0.6","metadata":{"policy-id":"vehicle","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"fd848027-65b5-4ee2-97e4-134b66d98a26","timestampMs":1750161074693,"name":"opa-34bdbe81-f424-4a91-9535-1955322e40a7","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 11:53:08 policy-pap | [2025-06-17T11:51:14.716+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 11:53:08 policy-pap | [2025-06-17T11:51:14.718+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 11:53:08 policy-pap | {"source":"pap-28965174-0d2a-4482-9599-2c3383d4bf34","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.vehicle":"ewogICJ2ZWhpY2xlcyI6IFsKICAgIHsgInZlaGljbGVfaWQiOiAidjEiLCAib3duZXIiOiAidXNlcjEiLCAidHlwZSI6ICJjYXIiLCAic3RhdHVzIjogImF2YWlsYWJsZSIgfSwKICAgIHsgInZlaGljbGVfaWQiOiAidjIiLCAib3duZXIiOiAidXNlcjIiLCAidHlwZSI6ICJiaWtlIiwgInN0YXR1cyI6ICJpbiB1c2UiIH0KICBdCn0K"},"policy":{"vehicle":"cGFja2FnZSB2ZWhpY2xlCgppbXBvcnQgIHJlZ28udjEKCmRlZmF1bHQgYWxsb3cgOj0gZmFsc2UKCmFsbG93IGlmIHsKICAgIHVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzCiAgICBhY3Rpb25faXNfZ3JhbnRlZAp9CgphY3Rpb25faXNfZ3JhbnRlZCBpZiB7CiAgICAidXNlIiBpbiBpbnB1dC5hY3Rpb25zCn0KCnVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzIGNvbnRhaW5zIHZlaGljbGVfZGF0YSBpZiB7CiAgICBzb21lIHZlaGljbGUgaW4gZGF0YS5ub2RlLnZlaGljbGUudmVoaWNsZXMKICAgIHZlaGljbGUudmVoaWNsZV9pZCA9PSBpbnB1dC52ZWhpY2xlX2lkCiAgICB2ZWhpY2xlLm93bmVyID09IGlucHV0LnVzZXIKICAgIHZlaGljbGVfZGF0YSA6PSB7aW5mbzogdmVoaWNsZVtpbmZvXSB8IGluZm8gaW4gaW5wdXQuYXR0cmlidXRlc30KfQo="}},"name":"vehicle","version":"1.0.6","metadata":{"policy-id":"vehicle","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"fd848027-65b5-4ee2-97e4-134b66d98a26","timestampMs":1750161074693,"name":"opa-34bdbe81-f424-4a91-9535-1955322e40a7","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 11:53:08 policy-pap | [2025-06-17T11:51:14.718+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 11:53:08 policy-pap | [2025-06-17T11:51:14.751+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 11:53:08 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"fd848027-65b5-4ee2-97e4-134b66d98a26","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n 
\"vehicle\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"vehicle","version":"1.0.6"}],"name":"opa-34bdbe81-f424-4a91-9535-1955322e40a7","requestId":"95b70497-bf79-40cc-9b11-cc749944a7db","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750161074741","deploymentInstanceInfo":""} 11:53:08 policy-pap | [2025-06-17T11:51:14.752+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpUpdate stopping 11:53:08 policy-pap | [2025-06-17T11:51:14.752+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpUpdate stopping enqueue 11:53:08 policy-pap | [2025-06-17T11:51:14.752+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpUpdate stopping timer 11:53:08 policy-pap | [2025-06-17T11:51:14.752+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=fd848027-65b5-4ee2-97e4-134b66d98a26, expireMs=1750161104708] 11:53:08 policy-pap | [2025-06-17T11:51:14.752+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpUpdate stopping listener 11:53:08 policy-pap | [2025-06-17T11:51:14.752+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpUpdate stopped 11:53:08 policy-pap | [2025-06-17T11:51:14.754+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 11:53:08 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"fd848027-65b5-4ee2-97e4-134b66d98a26","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"vehicle\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"vehicle","version":"1.0.6"}],"name":"opa-34bdbe81-f424-4a91-9535-1955322e40a7","requestId":"95b70497-bf79-40cc-9b11-cc749944a7db","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750161074741","deploymentInstanceInfo":""} 11:53:08 policy-pap | [2025-06-17T11:51:14.754+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id fd848027-65b5-4ee2-97e4-134b66d98a26 11:53:08 policy-pap | [2025-06-17T11:51:14.761+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpUpdate successful 11:53:08 policy-pap | [2025-06-17T11:51:14.761+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification] 11:53:08 policy-pap | {"deployed-policies":[{"policy-type":"onap.policies.native.opa","policy-type-version":"1.0.0","policy-id":"vehicle","policy-version":"1.0.6","success-count":1,"failure-count":0,"incomplete-count":0}],"undeployed-policies":[]} 11:53:08 policy-pap | [2025-06-17T11:51:14.761+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-34bdbe81-f424-4a91-9535-1955322e40a7 has no more requests 11:53:08 policy-pap | [2025-06-17T11:51:19.069+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=4c397d61-9c0d-43de-a88a-03760b13f8d4, expireMs=1750161079068] 11:53:08 policy-pap | [2025-06-17T11:51:31.361+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 11:53:08 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp 
heartbeat","response":null,"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"vehicle","version":"1.0.6"}],"name":"opa-34bdbe81-f424-4a91-9535-1955322e40a7","requestId":"459a7479-a768-40b0-831a-da4dd3106f7c","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750161091349","deploymentInstanceInfo":""} 11:53:08 policy-pap | [2025-06-17T11:51:31.362+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 11:53:08 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp heartbeat","response":null,"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"vehicle","version":"1.0.6"}],"name":"opa-34bdbe81-f424-4a91-9535-1955322e40a7","requestId":"459a7479-a768-40b0-831a-da4dd3106f7c","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750161091349","deploymentInstanceInfo":""} 11:53:08 policy-pap | [2025-06-17T11:51:31.363+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus 11:53:08 policy-pap | [2025-06-17T11:51:35.107+00:00|INFO|PdpModifyRequestMap|pool-3-thread-1] check for PDP records older than 360000ms 11:53:08 policy-pap | [2025-06-17T11:51:39.106+00:00|INFO|SessionData|http-nio-6969-exec-2] cache group opaGroup 11:53:08 policy-pap | [2025-06-17T11:51:39.106+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-2] remove policy vehicle 1.0.6 from subgroup opaGroup opa count=1 11:53:08 policy-pap | [2025-06-17T11:51:39.106+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-2] Registering an undeploy for policy vehicle 1.0.6 11:53:08 policy-pap | [2025-06-17T11:51:39.106+00:00|INFO|SessionData|http-nio-6969-exec-2] add update opa-34bdbe81-f424-4a91-9535-1955322e40a7 opaGroup opa policies=0 11:53:08 policy-pap | [2025-06-17T11:51:39.106+00:00|INFO|SessionData|http-nio-6969-exec-2] update cached group opaGroup 11:53:08 policy-pap | [2025-06-17T11:51:39.106+00:00|INFO|SessionData|http-nio-6969-exec-2] updating DB group opaGroup 11:53:08 policy-pap | [2025-06-17T11:51:39.113+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-2] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=opaGroup, pdpType=opa, policy=vehicle 1.0.6, action=UNDEPLOYMENT, timestamp=2025-06-17T11:51:39Z, user=policyadmin)] 11:53:08 policy-pap | [2025-06-17T11:51:39.123+00:00|INFO|ServiceManager|http-nio-6969-exec-2] opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpUpdate starting 11:53:08 policy-pap | [2025-06-17T11:51:39.123+00:00|INFO|ServiceManager|http-nio-6969-exec-2] opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpUpdate starting listener 11:53:08 policy-pap | [2025-06-17T11:51:39.124+00:00|INFO|ServiceManager|http-nio-6969-exec-2] opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpUpdate starting timer 11:53:08 policy-pap | [2025-06-17T11:51:39.124+00:00|INFO|TimerManager|http-nio-6969-exec-2] update timer registered Timer [name=3b88478e-4d54-486a-bc73-01095df1d796, expireMs=1750161129124] 11:53:08 policy-pap | [2025-06-17T11:51:39.124+00:00|INFO|TimerManager|Thread-9] update timer waiting 30000ms Timer [name=3b88478e-4d54-486a-bc73-01095df1d796, expireMs=1750161129124] 11:53:08 policy-pap | [2025-06-17T11:51:39.124+00:00|INFO|ServiceManager|http-nio-6969-exec-2] opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpUpdate starting enqueue 11:53:08 policy-pap | [2025-06-17T11:51:39.124+00:00|INFO|ServiceManager|http-nio-6969-exec-2] opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpUpdate started 11:53:08 policy-pap | 
[2025-06-17T11:51:39.124+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 11:53:08 policy-pap | {"source":"pap-28965174-0d2a-4482-9599-2c3383d4bf34","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"vehicle","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"3b88478e-4d54-486a-bc73-01095df1d796","timestampMs":1750161099106,"name":"opa-34bdbe81-f424-4a91-9535-1955322e40a7","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 11:53:08 policy-pap | [2025-06-17T11:51:39.138+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 11:53:08 policy-pap | {"source":"pap-28965174-0d2a-4482-9599-2c3383d4bf34","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"vehicle","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"3b88478e-4d54-486a-bc73-01095df1d796","timestampMs":1750161099106,"name":"opa-34bdbe81-f424-4a91-9535-1955322e40a7","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 11:53:08 policy-pap | [2025-06-17T11:51:39.138+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 11:53:08 policy-pap | [2025-06-17T11:51:39.138+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 11:53:08 policy-pap | {"source":"pap-28965174-0d2a-4482-9599-2c3383d4bf34","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"vehicle","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"3b88478e-4d54-486a-bc73-01095df1d796","timestampMs":1750161099106,"name":"opa-34bdbe81-f424-4a91-9535-1955322e40a7","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 11:53:08 policy-pap | [2025-06-17T11:51:39.138+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 11:53:08 policy-pap | [2025-06-17T11:51:39.148+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 11:53:08 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"3b88478e-4d54-486a-bc73-01095df1d796","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"vehicle\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-34bdbe81-f424-4a91-9535-1955322e40a7","requestId":"bb65420d-8fbc-45ad-a1b7-dff35a189367","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750161099137","deploymentInstanceInfo":""} 11:53:08 policy-pap | [2025-06-17T11:51:39.148+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 11:53:08 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"3b88478e-4d54-486a-bc73-01095df1d796","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"vehicle\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-34bdbe81-f424-4a91-9535-1955322e40a7","requestId":"bb65420d-8fbc-45ad-a1b7-dff35a189367","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750161099137","deploymentInstanceInfo":""} 11:53:08 policy-pap | [2025-06-17T11:51:39.148+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 3b88478e-4d54-486a-bc73-01095df1d796 11:53:08 policy-pap | [2025-06-17T11:51:39.148+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpUpdate stopping 11:53:08 policy-pap | 
[2025-06-17T11:51:39.148+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpUpdate stopping enqueue 11:53:08 policy-pap | [2025-06-17T11:51:39.148+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpUpdate stopping timer 11:53:08 policy-pap | [2025-06-17T11:51:39.148+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=3b88478e-4d54-486a-bc73-01095df1d796, expireMs=1750161129124] 11:53:08 policy-pap | [2025-06-17T11:51:39.148+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpUpdate stopping listener 11:53:08 policy-pap | [2025-06-17T11:51:39.148+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpUpdate stopped 11:53:08 policy-pap | [2025-06-17T11:51:39.157+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpUpdate successful 11:53:08 policy-pap | [2025-06-17T11:51:39.157+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification] 11:53:08 policy-pap | {"deployed-policies":[],"undeployed-policies":[{"policy-type":"onap.policies.native.opa","policy-type-version":"1.0.0","policy-id":"vehicle","policy-version":"1.0.6","success-count":1,"failure-count":0,"incomplete-count":0}]} 11:53:08 policy-pap | [2025-06-17T11:51:39.157+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-34bdbe81-f424-4a91-9535-1955322e40a7 has no more requests 11:53:08 policy-pap | [2025-06-17T11:51:39.483+00:00|INFO|SessionData|http-nio-6969-exec-4] cache group opaGroup 11:53:08 policy-pap | [2025-06-17T11:51:39.483+00:00|WARN|PdpGroupDeleteProvider|http-nio-6969-exec-4] failed to undeploy policy: vehicle null 11:53:08 policy-pap | [2025-06-17T11:51:39.483+00:00|WARN|PdpGroupDeleteControllerV1|http-nio-6969-exec-4] undeploy policy failed 11:53:08 policy-pap | org.onap.policy.models.base.PfModelException: policy does not appear in any PDP group: vehicle null 11:53:08 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteProvider.undeployPolicy(PdpGroupDeleteProvider.java:108) 11:53:08 policy-pap | at org.onap.policy.pap.main.rest.ProviderBase.process(ProviderBase.java:161) 11:53:08 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteProvider.undeploy(PdpGroupDeleteProvider.java:92) 11:53:08 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 11:53:08 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) 11:53:08 policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 11:53:08 policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569) 11:53:08 policy-pap | at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:359) 11:53:08 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:196) 11:53:08 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163) 11:53:08 policy-pap | at org.springframework.aop.aspectj.AspectJAfterThrowingAdvice.invoke(AspectJAfterThrowingAdvice.java:64) 11:53:08 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) 11:53:08 policy-pap | at 
org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:97) 11:53:08 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) 11:53:08 policy-pap | at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:728) 11:53:08 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteProvider$$SpringCGLIB$$0.undeploy() 11:53:08 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.lambda$deletePolicy$1(PdpGroupDeleteControllerV1.java:107) 11:53:08 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.doUndeployOperation(PdpGroupDeleteControllerV1.java:160) 11:53:08 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.deletePolicy(PdpGroupDeleteControllerV1.java:106) 11:53:08 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 11:53:08 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) 11:53:08 policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 11:53:08 policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569) 11:53:08 policy-pap | at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:359) 11:53:08 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:196) 11:53:08 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163) 11:53:08 policy-pap | at org.springframework.aop.aspectj.AspectJAfterThrowingAdvice.invoke(AspectJAfterThrowingAdvice.java:64) 11:53:08 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) 11:53:08 policy-pap | at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:97) 11:53:08 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) 11:53:08 policy-pap | at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:728) 11:53:08 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1$$SpringCGLIB$$0.deletePolicy() 11:53:08 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 11:53:08 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) 11:53:08 policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 11:53:08 policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569) 11:53:08 policy-pap | at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:258) 11:53:08 policy-pap | at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:191) 11:53:08 policy-pap | at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:118) 11:53:08 policy-pap | at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:986) 11:53:08 policy-pap | at 
org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:891) 11:53:08 policy-pap | at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87) 11:53:08 policy-pap | at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1089) 11:53:08 policy-pap | at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:979) 11:53:08 policy-pap | at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1014) 11:53:08 policy-pap | at org.springframework.web.servlet.FrameworkServlet.doDelete(FrameworkServlet.java:936) 11:53:08 policy-pap | at jakarta.servlet.http.HttpServlet.service(HttpServlet.java:659) 11:53:08 policy-pap | at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:885) 11:53:08 policy-pap | at jakarta.servlet.http.HttpServlet.service(HttpServlet.java:723) 11:53:08 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:195) 11:53:08 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 11:53:08 policy-pap | at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:51) 11:53:08 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) 11:53:08 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 11:53:08 policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:108) 11:53:08 policy-pap | at org.springframework.security.web.FilterChainProxy.lambda$doFilterInternal$3(FilterChainProxy.java:231) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$FilterObservation$SimpleFilterObservation.lambda$wrap$1(ObservationFilterChainDecorator.java:479) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$AroundFilterObservation$SimpleAroundFilterObservation.lambda$wrap$1(ObservationFilterChainDecorator.java:340) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator.lambda$wrapSecured$0(ObservationFilterChainDecorator.java:82) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:128) 11:53:08 policy-pap | at org.springframework.security.web.access.intercept.AuthorizationFilter.doFilter(AuthorizationFilter.java:101) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 11:53:08 policy-pap | at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:126) 11:53:08 policy-pap | at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:120) 11:53:08 policy-pap | at 
org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 11:53:08 policy-pap | at org.springframework.security.web.authentication.AnonymousAuthenticationFilter.doFilter(AnonymousAuthenticationFilter.java:100) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 11:53:08 policy-pap | at org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter.doFilter(SecurityContextHolderAwareRequestFilter.java:179) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 11:53:08 policy-pap | at org.springframework.security.web.savedrequest.RequestCacheAwareFilter.doFilter(RequestCacheAwareFilter.java:63) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 11:53:08 policy-pap | at org.springframework.security.web.authentication.www.BasicAuthenticationFilter.doFilterInternal(BasicAuthenticationFilter.java:213) 11:53:08 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 11:53:08 policy-pap | at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:107) 11:53:08 policy-pap | at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:93) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 11:53:08 policy-pap | at 
org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 11:53:08 policy-pap | at org.springframework.security.web.header.HeaderWriterFilter.doHeadersAfter(HeaderWriterFilter.java:90) 11:53:08 policy-pap | at org.springframework.security.web.header.HeaderWriterFilter.doFilterInternal(HeaderWriterFilter.java:75) 11:53:08 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 11:53:08 policy-pap | at org.springframework.security.web.context.SecurityContextHolderFilter.doFilter(SecurityContextHolderFilter.java:82) 11:53:08 policy-pap | at org.springframework.security.web.context.SecurityContextHolderFilter.doFilter(SecurityContextHolderFilter.java:69) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 11:53:08 policy-pap | at org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter.doFilterInternal(WebAsyncManagerIntegrationFilter.java:62) 11:53:08 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 11:53:08 policy-pap | at org.springframework.security.web.session.DisableEncodeUrlFilter.doFilterInternal(DisableEncodeUrlFilter.java:42) 11:53:08 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$AroundFilterObservation$SimpleAroundFilterObservation.lambda$wrap$0(ObservationFilterChainDecorator.java:323) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:224) 11:53:08 policy-pap | at 
org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 11:53:08 policy-pap | at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:233) 11:53:08 policy-pap | at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:191) 11:53:08 policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:113) 11:53:08 policy-pap | at org.springframework.web.servlet.handler.HandlerMappingIntrospector.lambda$createCacheFilter$3(HandlerMappingIntrospector.java:243) 11:53:08 policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:113) 11:53:08 policy-pap | at org.springframework.web.filter.CompositeFilter.doFilter(CompositeFilter.java:74) 11:53:08 policy-pap | at org.springframework.security.config.annotation.web.configuration.WebMvcSecurityConfiguration$CompositeFilterChainProxy.doFilter(WebMvcSecurityConfiguration.java:238) 11:53:08 policy-pap | at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:362) 11:53:08 policy-pap | at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:278) 11:53:08 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) 11:53:08 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 11:53:08 policy-pap | at org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:100) 11:53:08 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 11:53:08 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) 11:53:08 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 11:53:08 policy-pap | at org.springframework.web.filter.FormContentFilter.doFilterInternal(FormContentFilter.java:93) 11:53:08 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 11:53:08 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) 11:53:08 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 11:53:08 policy-pap | at org.springframework.web.filter.ServerHttpObservationFilter.doFilterInternal(ServerHttpObservationFilter.java:114) 11:53:08 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 11:53:08 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) 11:53:08 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 11:53:08 policy-pap | at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:201) 11:53:08 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 11:53:08 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) 11:53:08 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 11:53:08 policy-pap | at 
org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:167) 11:53:08 policy-pap | at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:90) 11:53:08 policy-pap | at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:483) 11:53:08 policy-pap | at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:116) 11:53:08 policy-pap | at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:93) 11:53:08 policy-pap | at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:74) 11:53:08 policy-pap | at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:344) 11:53:08 policy-pap | at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:398) 11:53:08 policy-pap | at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:63) 11:53:08 policy-pap | at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:903) 11:53:08 policy-pap | at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1740) 11:53:08 policy-pap | at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:52) 11:53:08 policy-pap | at org.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1189) 11:53:08 policy-pap | at org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:658) 11:53:08 policy-pap | at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:63) 11:53:08 policy-pap | at java.base/java.lang.Thread.run(Thread.java:840) 11:53:08 policy-pap | [2025-06-17T11:51:40.194+00:00|INFO|SessionData|http-nio-6969-exec-3] cache group opaGroup 11:53:08 policy-pap | [2025-06-17T11:51:40.194+00:00|INFO|PdpGroupDeployProvider|http-nio-6969-exec-3] add policy abac 1.0.7 to subgroup opaGroup opa count=2 11:53:08 policy-pap | [2025-06-17T11:51:40.194+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-3] Registering a deploy for policy abac 1.0.7 11:53:08 policy-pap | [2025-06-17T11:51:40.194+00:00|INFO|SessionData|http-nio-6969-exec-3] add update opa-34bdbe81-f424-4a91-9535-1955322e40a7 opaGroup opa policies=1 11:53:08 policy-pap | [2025-06-17T11:51:40.194+00:00|INFO|SessionData|http-nio-6969-exec-3] update cached group opaGroup 11:53:08 policy-pap | [2025-06-17T11:51:40.195+00:00|INFO|SessionData|http-nio-6969-exec-3] updating DB group opaGroup 11:53:08 policy-pap | [2025-06-17T11:51:40.203+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-3] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=opaGroup, pdpType=opa, policy=abac 1.0.7, action=DEPLOYMENT, timestamp=2025-06-17T11:51:40Z, user=policyadmin)] 11:53:08 policy-pap | [2025-06-17T11:51:40.211+00:00|INFO|ServiceManager|http-nio-6969-exec-3] opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpUpdate starting 11:53:08 policy-pap | [2025-06-17T11:51:40.211+00:00|INFO|ServiceManager|http-nio-6969-exec-3] opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpUpdate starting listener 11:53:08 policy-pap | [2025-06-17T11:51:40.211+00:00|INFO|ServiceManager|http-nio-6969-exec-3] opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpUpdate starting timer 11:53:08 policy-pap | [2025-06-17T11:51:40.211+00:00|INFO|TimerManager|http-nio-6969-exec-3] update timer registered Timer [name=c22c4231-ead1-4117-b32a-b0d7b235ec15, expireMs=1750161130211] 11:53:08 policy-pap | [2025-06-17T11:51:40.211+00:00|INFO|ServiceManager|http-nio-6969-exec-3] 
opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpUpdate starting enqueue 11:53:08 policy-pap | [2025-06-17T11:51:40.211+00:00|INFO|ServiceManager|http-nio-6969-exec-3] opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpUpdate started 11:53:08 policy-pap | [2025-06-17T11:51:40.211+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 11:53:08 policy-pap | {"source":"pap-28965174-0d2a-4482-9599-2c3383d4bf34","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.abac":"ewogICAgInNlbnNvcl9kYXRhIjogWwogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDEiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiU3JpIExhbmthIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjI4IEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjYiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDAyIiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkNvbG9tYm8iLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMzAgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjEyMDAgbW0iLAogICAgICAgICAgICAid2luZHNwZWVkIjogIjYuMCBtL3MiLAogICAgICAgICAgICAiaHVtaWRpdHkiOiAiNDUlIiwKICAgICAgICAgICAgInBhcnRpY2xlX2RlbnNpdHkiOiAiMS41IGcvbCIsCiAgICAgICAgICAgICJ0aW1lc3RhbXAiOiAiMjAyNC0wMi0yNiIKICAgICAgICB9LAogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDMiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiS2FuZHkiLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMjUgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjgwMCBtbSIsCiAgICAgICAgICAgICJ3aW5kc3BlZWQiOiAiNC41IG0vcyIsCiAgICAgICAgICAgICJodW1pZGl0eSI6ICI2MCUiLAogICAgICAgICAgICAicGFydGljbGVfZGVuc2l0eSI6ICIxLjEgZy9sIiwKICAgICAgICAgICAgInRpbWVzdGFtcCI6ICIyMDI0LTAyLTI2IgogICAgICAgIH0sCiAgICAgICAgewogICAgICAgICAgICAiaWQiOiAiMDAwNCIsCiAgICAgICAgICAgICJsb2NhdGlvbiI6ICJHYWxsZSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI3LjIgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjMwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuOCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA1IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkphZmZuYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICItNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiMzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICIzLjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjIwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjAuOSBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA2IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIlRyaW5jb21hbGVlIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjIwIEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjU1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA3IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk51d2FyYSBFbGl5YSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNjAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI0LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjUwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA4IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkFudXJhZGhhcHVyYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyOCBDIiwKICAgICAgIC
AgICAgInByZWNpcGl0YXRpb24iOiAiNzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA5IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk1hdGFyYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzMiBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiOTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI2LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjY1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfQogICAgXQp9"},"policy":{"abac":"cGFja2FnZSBhYmFjCgppbXBvcnQgcmVnby52MQoKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQoKYWxsb3cgaWYgewogdmlld2FibGVfc2Vuc29yX2RhdGEKIGFjdGlvbl9pc19yZWFkCn0KCmFjdGlvbl9pc19yZWFkIGlmICJyZWFkIiBpbiBpbnB1dC5hY3Rpb25zCgp2aWV3YWJsZV9zZW5zb3JfZGF0YSBjb250YWlucyB2aWV3X2RhdGEgaWYgewogc29tZSBzZW5zb3JfZGF0YSBpbiBkYXRhLm5vZGUuYWJhYy5zZW5zb3JfZGF0YQogc2Vuc29yX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KIHNlbnNvcl9kYXRhLnRpbWVzdGFtcCA8IGlucHV0LnRpbWVfcGVyaW9kLnRvCgogdmlld19kYXRhIDo9IHtkYXRhdHlwZTogc2Vuc29yX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"abac","version":"1.0.7","metadata":{"policy-id":"abac","policy-version":"1.0.7"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"c22c4231-ead1-4117-b32a-b0d7b235ec15","timestampMs":1750161100194,"name":"opa-34bdbe81-f424-4a91-9535-1955322e40a7","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 11:53:08 policy-pap | [2025-06-17T11:51:40.218+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 11:53:08 policy-pap | {"source":"pap-28965174-0d2a-4482-9599-2c3383d4bf34","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.abac":"ewogICAgInNlbnNvcl9kYXRhIjogWwogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDEiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiU3JpIExhbmthIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjI4IEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjYiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDAyIiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkNvbG9tYm8iLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMzAgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjEyMDAgbW0iLAogICAgICAgICAgICAid2luZHNwZWVkIjogIjYuMCBtL3MiLAogICAgICAgICAgICAiaHVtaWRpdHkiOiAiNDUlIiwKICAgICAgICAgICAgInBhcnRpY2xlX2RlbnNpdHkiOiAiMS41IGcvbCIsCiAgICAgICAgICAgICJ0aW1lc3RhbXAiOiAiMjAyNC0wMi0yNiIKICAgICAgICB9LAogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDMiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiS2FuZHkiLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMjUgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjgwMCBtbSIsCiAgICAgICAgICAgICJ3aW5kc3BlZWQiOiAiNC41IG0vcyIsCiAgICAgICAgICAgICJodW1pZGl0eSI6ICI2MCUiLAogICAgICAgICAgICAicGFydGljbGVfZGVuc2l0eSI6ICIxLjEgZy9sIiwKICAgICAgICAgICAgInRpbWVzdGFtcCI6ICIyMDI0LTAyLTI2IgogICAgICAgIH0sCiAgICAgICAgewogICAgICAgICAgICAiaWQiOiAiMDAwNCIsCiAgICAgICAgICAgICJsb2NhdGlvbiI6ICJHYWxsZSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI3LjIgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjMwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuOCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgI
CAgICAgICJpZCI6ICIwMDA1IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkphZmZuYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICItNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiMzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICIzLjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjIwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjAuOSBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA2IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIlRyaW5jb21hbGVlIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjIwIEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjU1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA3IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk51d2FyYSBFbGl5YSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNjAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI0LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjUwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA4IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkFudXJhZGhhcHVyYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyOCBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA5IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk1hdGFyYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzMiBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiOTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI2LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjY1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfQogICAgXQp9"},"policy":{"abac":"cGFja2FnZSBhYmFjCgppbXBvcnQgcmVnby52MQoKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQoKYWxsb3cgaWYgewogdmlld2FibGVfc2Vuc29yX2RhdGEKIGFjdGlvbl9pc19yZWFkCn0KCmFjdGlvbl9pc19yZWFkIGlmICJyZWFkIiBpbiBpbnB1dC5hY3Rpb25zCgp2aWV3YWJsZV9zZW5zb3JfZGF0YSBjb250YWlucyB2aWV3X2RhdGEgaWYgewogc29tZSBzZW5zb3JfZGF0YSBpbiBkYXRhLm5vZGUuYWJhYy5zZW5zb3JfZGF0YQogc2Vuc29yX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KIHNlbnNvcl9kYXRhLnRpbWVzdGFtcCA8IGlucHV0LnRpbWVfcGVyaW9kLnRvCgogdmlld19kYXRhIDo9IHtkYXRhdHlwZTogc2Vuc29yX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"abac","version":"1.0.7","metadata":{"policy-id":"abac","policy-version":"1.0.7"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"c22c4231-ead1-4117-b32a-b0d7b235ec15","timestampMs":1750161100194,"name":"opa-34bdbe81-f424-4a91-9535-1955322e40a7","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 11:53:08 policy-pap | [2025-06-17T11:51:40.219+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 11:53:08 policy-pap | [2025-06-17T11:51:40.220+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 11:53:08 policy-pap | 
{"source":"pap-28965174-0d2a-4482-9599-2c3383d4bf34","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.abac":"ewogICAgInNlbnNvcl9kYXRhIjogWwogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDEiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiU3JpIExhbmthIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjI4IEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjYiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDAyIiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkNvbG9tYm8iLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMzAgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjEyMDAgbW0iLAogICAgICAgICAgICAid2luZHNwZWVkIjogIjYuMCBtL3MiLAogICAgICAgICAgICAiaHVtaWRpdHkiOiAiNDUlIiwKICAgICAgICAgICAgInBhcnRpY2xlX2RlbnNpdHkiOiAiMS41IGcvbCIsCiAgICAgICAgICAgICJ0aW1lc3RhbXAiOiAiMjAyNC0wMi0yNiIKICAgICAgICB9LAogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDMiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiS2FuZHkiLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMjUgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjgwMCBtbSIsCiAgICAgICAgICAgICJ3aW5kc3BlZWQiOiAiNC41IG0vcyIsCiAgICAgICAgICAgICJodW1pZGl0eSI6ICI2MCUiLAogICAgICAgICAgICAicGFydGljbGVfZGVuc2l0eSI6ICIxLjEgZy9sIiwKICAgICAgICAgICAgInRpbWVzdGFtcCI6ICIyMDI0LTAyLTI2IgogICAgICAgIH0sCiAgICAgICAgewogICAgICAgICAgICAiaWQiOiAiMDAwNCIsCiAgICAgICAgICAgICJsb2NhdGlvbiI6ICJHYWxsZSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI3LjIgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjMwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuOCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA1IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkphZmZuYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICItNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiMzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICIzLjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjIwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjAuOSBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA2IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIlRyaW5jb21hbGVlIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjIwIEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjU1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA3IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk51d2FyYSBFbGl5YSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNjAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI0LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjUwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA4IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkFudXJhZGhhcHVyYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyOCBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA5IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk1hdGFyYSIsC
iAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzMiBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiOTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI2LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjY1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfQogICAgXQp9"},"policy":{"abac":"cGFja2FnZSBhYmFjCgppbXBvcnQgcmVnby52MQoKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQoKYWxsb3cgaWYgewogdmlld2FibGVfc2Vuc29yX2RhdGEKIGFjdGlvbl9pc19yZWFkCn0KCmFjdGlvbl9pc19yZWFkIGlmICJyZWFkIiBpbiBpbnB1dC5hY3Rpb25zCgp2aWV3YWJsZV9zZW5zb3JfZGF0YSBjb250YWlucyB2aWV3X2RhdGEgaWYgewogc29tZSBzZW5zb3JfZGF0YSBpbiBkYXRhLm5vZGUuYWJhYy5zZW5zb3JfZGF0YQogc2Vuc29yX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KIHNlbnNvcl9kYXRhLnRpbWVzdGFtcCA8IGlucHV0LnRpbWVfcGVyaW9kLnRvCgogdmlld19kYXRhIDo9IHtkYXRhdHlwZTogc2Vuc29yX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"abac","version":"1.0.7","metadata":{"policy-id":"abac","policy-version":"1.0.7"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"c22c4231-ead1-4117-b32a-b0d7b235ec15","timestampMs":1750161100194,"name":"opa-34bdbe81-f424-4a91-9535-1955322e40a7","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 11:53:08 policy-pap | [2025-06-17T11:51:40.223+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 11:53:08 policy-pap | [2025-06-17T11:51:40.249+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 11:53:08 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"c22c4231-ead1-4117-b32a-b0d7b235ec15","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"abac\": \"1.0.7\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"abac","version":"1.0.7"}],"name":"opa-34bdbe81-f424-4a91-9535-1955322e40a7","requestId":"e2a15dbb-e3f8-48c1-858f-9747e6e552c4","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750161100239","deploymentInstanceInfo":""} 11:53:08 policy-pap | [2025-06-17T11:51:40.250+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id c22c4231-ead1-4117-b32a-b0d7b235ec15 11:53:08 policy-pap | [2025-06-17T11:51:40.250+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 11:53:08 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"c22c4231-ead1-4117-b32a-b0d7b235ec15","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"abac\": \"1.0.7\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"abac","version":"1.0.7"}],"name":"opa-34bdbe81-f424-4a91-9535-1955322e40a7","requestId":"e2a15dbb-e3f8-48c1-858f-9747e6e552c4","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750161100239","deploymentInstanceInfo":""} 11:53:08 policy-pap | [2025-06-17T11:51:40.251+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpUpdate stopping 11:53:08 policy-pap | [2025-06-17T11:51:40.251+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpUpdate stopping enqueue 11:53:08 policy-pap | [2025-06-17T11:51:40.251+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpUpdate 
stopping timer 11:53:08 policy-pap | [2025-06-17T11:51:40.252+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=c22c4231-ead1-4117-b32a-b0d7b235ec15, expireMs=1750161130211] 11:53:08 policy-pap | [2025-06-17T11:51:40.252+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpUpdate stopping listener 11:53:08 policy-pap | [2025-06-17T11:51:40.252+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpUpdate stopped 11:53:08 policy-pap | [2025-06-17T11:51:40.261+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpUpdate successful 11:53:08 policy-pap | [2025-06-17T11:51:40.261+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-34bdbe81-f424-4a91-9535-1955322e40a7 has no more requests 11:53:08 policy-pap | [2025-06-17T11:51:40.261+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification] 11:53:08 policy-pap | {"deployed-policies":[{"policy-type":"onap.policies.native.opa","policy-type-version":"1.0.0","policy-id":"abac","policy-version":"1.0.7","success-count":1,"failure-count":0,"incomplete-count":0}],"undeployed-policies":[]} 11:53:08 policy-pap | [2025-06-17T11:52:04.868+00:00|INFO|SessionData|http-nio-6969-exec-7] cache group opaGroup 11:53:08 policy-pap | [2025-06-17T11:52:04.868+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-7] remove policy abac 1.0.7 from subgroup opaGroup opa count=1 11:53:08 policy-pap | [2025-06-17T11:52:04.868+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-7] Registering an undeploy for policy abac 1.0.7 11:53:08 policy-pap | [2025-06-17T11:52:04.868+00:00|INFO|SessionData|http-nio-6969-exec-7] add update opa-34bdbe81-f424-4a91-9535-1955322e40a7 opaGroup opa policies=0 11:53:08 policy-pap | [2025-06-17T11:52:04.868+00:00|INFO|SessionData|http-nio-6969-exec-7] update cached group opaGroup 11:53:08 policy-pap | [2025-06-17T11:52:04.868+00:00|INFO|SessionData|http-nio-6969-exec-7] updating DB group opaGroup 11:53:08 policy-pap | [2025-06-17T11:52:04.874+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-7] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=opaGroup, pdpType=opa, policy=abac 1.0.7, action=UNDEPLOYMENT, timestamp=2025-06-17T11:52:04Z, user=policyadmin)] 11:53:08 policy-pap | [2025-06-17T11:52:04.880+00:00|INFO|ServiceManager|http-nio-6969-exec-7] opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpUpdate starting 11:53:08 policy-pap | [2025-06-17T11:52:04.880+00:00|INFO|ServiceManager|http-nio-6969-exec-7] opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpUpdate starting listener 11:53:08 policy-pap | [2025-06-17T11:52:04.880+00:00|INFO|ServiceManager|http-nio-6969-exec-7] opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpUpdate starting timer 11:53:08 policy-pap | [2025-06-17T11:52:04.880+00:00|INFO|TimerManager|http-nio-6969-exec-7] update timer registered Timer [name=09ce30f1-e49a-4079-9fe9-9622bda9e261, expireMs=1750161154880] 11:53:08 policy-pap | [2025-06-17T11:52:04.880+00:00|INFO|ServiceManager|http-nio-6969-exec-7] opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpUpdate starting enqueue 11:53:08 policy-pap | [2025-06-17T11:52:04.880+00:00|INFO|ServiceManager|http-nio-6969-exec-7] opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpUpdate started 11:53:08 policy-pap | [2025-06-17T11:52:04.880+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 11:53:08 policy-pap | 
{"source":"pap-28965174-0d2a-4482-9599-2c3383d4bf34","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"abac","version":"1.0.7"}],"messageName":"PDP_UPDATE","requestId":"09ce30f1-e49a-4079-9fe9-9622bda9e261","timestampMs":1750161124868,"name":"opa-34bdbe81-f424-4a91-9535-1955322e40a7","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 11:53:08 policy-pap | [2025-06-17T11:52:04.888+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 11:53:08 policy-pap | {"source":"pap-28965174-0d2a-4482-9599-2c3383d4bf34","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"abac","version":"1.0.7"}],"messageName":"PDP_UPDATE","requestId":"09ce30f1-e49a-4079-9fe9-9622bda9e261","timestampMs":1750161124868,"name":"opa-34bdbe81-f424-4a91-9535-1955322e40a7","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 11:53:08 policy-pap | [2025-06-17T11:52:04.888+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 11:53:08 policy-pap | [2025-06-17T11:52:04.890+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 11:53:08 policy-pap | {"source":"pap-28965174-0d2a-4482-9599-2c3383d4bf34","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"abac","version":"1.0.7"}],"messageName":"PDP_UPDATE","requestId":"09ce30f1-e49a-4079-9fe9-9622bda9e261","timestampMs":1750161124868,"name":"opa-34bdbe81-f424-4a91-9535-1955322e40a7","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 11:53:08 policy-pap | [2025-06-17T11:52:04.890+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 11:53:08 policy-pap | [2025-06-17T11:52:04.896+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 11:53:08 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"09ce30f1-e49a-4079-9fe9-9622bda9e261","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"abac\": \"1.0.7\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-34bdbe81-f424-4a91-9535-1955322e40a7","requestId":"c0d5b3a4-f81f-4a70-8251-27b7727a4894","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750161124886","deploymentInstanceInfo":""} 11:53:08 policy-pap | [2025-06-17T11:52:04.896+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 11:53:08 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"09ce30f1-e49a-4079-9fe9-9622bda9e261","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"abac\": \"1.0.7\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-34bdbe81-f424-4a91-9535-1955322e40a7","requestId":"c0d5b3a4-f81f-4a70-8251-27b7727a4894","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750161124886","deploymentInstanceInfo":""} 11:53:08 policy-pap | [2025-06-17T11:52:04.896+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpUpdate stopping 11:53:08 policy-pap | [2025-06-17T11:52:04.896+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 09ce30f1-e49a-4079-9fe9-9622bda9e261 11:53:08 policy-pap | [2025-06-17T11:52:04.896+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpUpdate stopping enqueue 
11:53:08 policy-pap | [2025-06-17T11:52:04.896+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpUpdate stopping timer 11:53:08 policy-pap | [2025-06-17T11:52:04.896+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=09ce30f1-e49a-4079-9fe9-9622bda9e261, expireMs=1750161154880] 11:53:08 policy-pap | [2025-06-17T11:52:04.896+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpUpdate stopping listener 11:53:08 policy-pap | [2025-06-17T11:52:04.897+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpUpdate stopped 11:53:08 policy-pap | [2025-06-17T11:52:04.906+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-34bdbe81-f424-4a91-9535-1955322e40a7 PdpUpdate successful 11:53:08 policy-pap | [2025-06-17T11:52:04.906+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-34bdbe81-f424-4a91-9535-1955322e40a7 has no more requests 11:53:08 policy-pap | [2025-06-17T11:52:04.906+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification] 11:53:08 policy-pap | {"deployed-policies":[],"undeployed-policies":[{"policy-type":"onap.policies.native.opa","policy-type-version":"1.0.0","policy-id":"abac","policy-version":"1.0.7","success-count":1,"failure-count":0,"incomplete-count":0}]} 11:53:08 policy-pap | [2025-06-17T11:52:05.207+00:00|INFO|SessionData|http-nio-6969-exec-8] cache group opaGroup 11:53:08 policy-pap | [2025-06-17T11:52:05.208+00:00|WARN|PdpGroupDeleteProvider|http-nio-6969-exec-8] failed to undeploy policy: abac null 11:53:08 policy-pap | [2025-06-17T11:52:05.208+00:00|WARN|PdpGroupDeleteControllerV1|http-nio-6969-exec-8] undeploy policy failed 11:53:08 policy-pap | org.onap.policy.models.base.PfModelException: policy does not appear in any PDP group: abac null 11:53:08 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteProvider.undeployPolicy(PdpGroupDeleteProvider.java:108) 11:53:08 policy-pap | at org.onap.policy.pap.main.rest.ProviderBase.process(ProviderBase.java:161) 11:53:08 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteProvider.undeploy(PdpGroupDeleteProvider.java:92) 11:53:08 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 11:53:08 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) 11:53:08 policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 11:53:08 policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569) 11:53:08 policy-pap | at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:359) 11:53:08 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:196) 11:53:08 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163) 11:53:08 policy-pap | at org.springframework.aop.aspectj.AspectJAfterThrowingAdvice.invoke(AspectJAfterThrowingAdvice.java:64) 11:53:08 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) 11:53:08 policy-pap | at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:97) 11:53:08 policy-pap | at 
org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) 11:53:08 policy-pap | at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:728) 11:53:08 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteProvider$$SpringCGLIB$$0.undeploy() 11:53:08 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.lambda$deletePolicy$1(PdpGroupDeleteControllerV1.java:107) 11:53:08 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.doUndeployOperation(PdpGroupDeleteControllerV1.java:160) 11:53:08 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.deletePolicy(PdpGroupDeleteControllerV1.java:106) 11:53:08 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 11:53:08 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) 11:53:08 policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 11:53:08 policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569) 11:53:08 policy-pap | at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:359) 11:53:08 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:196) 11:53:08 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163) 11:53:08 policy-pap | at org.springframework.aop.aspectj.AspectJAfterThrowingAdvice.invoke(AspectJAfterThrowingAdvice.java:64) 11:53:08 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) 11:53:08 policy-pap | at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:97) 11:53:08 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) 11:53:08 policy-pap | at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:728) 11:53:08 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1$$SpringCGLIB$$0.deletePolicy() 11:53:08 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 11:53:08 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) 11:53:08 policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 11:53:08 policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569) 11:53:08 policy-pap | at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:258) 11:53:08 policy-pap | at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:191) 11:53:08 policy-pap | at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:118) 11:53:08 policy-pap | at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:986) 11:53:08 policy-pap | at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:891) 
11:53:08 policy-pap | at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87) 11:53:08 policy-pap | at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1089) 11:53:08 policy-pap | at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:979) 11:53:08 policy-pap | at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1014) 11:53:08 policy-pap | at org.springframework.web.servlet.FrameworkServlet.doDelete(FrameworkServlet.java:936) 11:53:08 policy-pap | at jakarta.servlet.http.HttpServlet.service(HttpServlet.java:659) 11:53:08 policy-pap | at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:885) 11:53:08 policy-pap | at jakarta.servlet.http.HttpServlet.service(HttpServlet.java:723) 11:53:08 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:195) 11:53:08 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 11:53:08 policy-pap | at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:51) 11:53:08 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) 11:53:08 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 11:53:08 policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:108) 11:53:08 policy-pap | at org.springframework.security.web.FilterChainProxy.lambda$doFilterInternal$3(FilterChainProxy.java:231) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$FilterObservation$SimpleFilterObservation.lambda$wrap$1(ObservationFilterChainDecorator.java:479) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$AroundFilterObservation$SimpleAroundFilterObservation.lambda$wrap$1(ObservationFilterChainDecorator.java:340) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator.lambda$wrapSecured$0(ObservationFilterChainDecorator.java:82) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:128) 11:53:08 policy-pap | at org.springframework.security.web.access.intercept.AuthorizationFilter.doFilter(AuthorizationFilter.java:101) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 11:53:08 policy-pap | at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:126) 11:53:08 policy-pap | at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:120) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 11:53:08 policy-pap | at 
org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 11:53:08 policy-pap | at org.springframework.security.web.authentication.AnonymousAuthenticationFilter.doFilter(AnonymousAuthenticationFilter.java:100) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 11:53:08 policy-pap | at org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter.doFilter(SecurityContextHolderAwareRequestFilter.java:179) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 11:53:08 policy-pap | at org.springframework.security.web.savedrequest.RequestCacheAwareFilter.doFilter(RequestCacheAwareFilter.java:63) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 11:53:08 policy-pap | at org.springframework.security.web.authentication.www.BasicAuthenticationFilter.doFilterInternal(BasicAuthenticationFilter.java:213) 11:53:08 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 11:53:08 policy-pap | at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:107) 11:53:08 policy-pap | at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:93) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 11:53:08 policy-pap | at 
org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 11:53:08 policy-pap | at org.springframework.security.web.header.HeaderWriterFilter.doHeadersAfter(HeaderWriterFilter.java:90) 11:53:08 policy-pap | at org.springframework.security.web.header.HeaderWriterFilter.doFilterInternal(HeaderWriterFilter.java:75) 11:53:08 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 11:53:08 policy-pap | at org.springframework.security.web.context.SecurityContextHolderFilter.doFilter(SecurityContextHolderFilter.java:82) 11:53:08 policy-pap | at org.springframework.security.web.context.SecurityContextHolderFilter.doFilter(SecurityContextHolderFilter.java:69) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 11:53:08 policy-pap | at org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter.doFilterInternal(WebAsyncManagerIntegrationFilter.java:62) 11:53:08 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 11:53:08 policy-pap | at org.springframework.security.web.session.DisableEncodeUrlFilter.doFilterInternal(DisableEncodeUrlFilter.java:42) 11:53:08 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$AroundFilterObservation$SimpleAroundFilterObservation.lambda$wrap$0(ObservationFilterChainDecorator.java:323) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:224) 11:53:08 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 11:53:08 policy-pap | at 
org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:233) 11:53:08 policy-pap | at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:191) 11:53:08 policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:113) 11:53:08 policy-pap | at org.springframework.web.servlet.handler.HandlerMappingIntrospector.lambda$createCacheFilter$3(HandlerMappingIntrospector.java:243) 11:53:08 policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:113) 11:53:08 policy-pap | at org.springframework.web.filter.CompositeFilter.doFilter(CompositeFilter.java:74) 11:53:08 policy-pap | at org.springframework.security.config.annotation.web.configuration.WebMvcSecurityConfiguration$CompositeFilterChainProxy.doFilter(WebMvcSecurityConfiguration.java:238) 11:53:08 policy-pap | at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:362) 11:53:08 policy-pap | at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:278) 11:53:08 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) 11:53:08 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 11:53:08 policy-pap | at org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:100) 11:53:08 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 11:53:08 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) 11:53:08 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 11:53:08 policy-pap | at org.springframework.web.filter.FormContentFilter.doFilterInternal(FormContentFilter.java:93) 11:53:08 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 11:53:08 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) 11:53:08 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 11:53:08 policy-pap | at org.springframework.web.filter.ServerHttpObservationFilter.doFilterInternal(ServerHttpObservationFilter.java:114) 11:53:08 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 11:53:08 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) 11:53:08 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 11:53:08 policy-pap | at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:201) 11:53:08 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 11:53:08 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) 11:53:08 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 11:53:08 policy-pap | at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:167) 11:53:08 policy-pap | at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:90) 11:53:08 
policy-pap | at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:483) 11:53:08 policy-pap | at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:116) 11:53:08 policy-pap | at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:93) 11:53:08 policy-pap | at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:74) 11:53:08 policy-pap | at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:344) 11:53:08 policy-pap | at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:398) 11:53:08 policy-pap | at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:63) 11:53:08 policy-pap | at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:903) 11:53:08 policy-pap | at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1740) 11:53:08 policy-pap | at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:52) 11:53:08 policy-pap | at org.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1189) 11:53:08 policy-pap | at org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:658) 11:53:08 policy-pap | at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:63) 11:53:08 policy-pap | at java.base/java.lang.Thread.run(Thread.java:840) 11:53:08 policy-pap | [2025-06-17T11:52:09.124+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=3b88478e-4d54-486a-bc73-01095df1d796, expireMs=1750161129124] 11:53:08 postgres | The files belonging to this database system will be owned by user "postgres". 11:53:08 postgres | This user must also own the server process. 11:53:08 postgres | 11:53:08 postgres | The database cluster will be initialized with locale "en_US.utf8". 11:53:08 postgres | The default database encoding has accordingly been set to "UTF8". 11:53:08 postgres | The default text search configuration will be set to "english". 11:53:08 postgres | 11:53:08 postgres | Data page checksums are disabled. 11:53:08 postgres | 11:53:08 postgres | fixing permissions on existing directory /var/lib/postgresql/data ... ok 11:53:08 postgres | creating subdirectories ... ok 11:53:08 postgres | selecting dynamic shared memory implementation ... posix 11:53:08 postgres | selecting default max_connections ... 100 11:53:08 postgres | selecting default shared_buffers ... 128MB 11:53:08 postgres | selecting default time zone ... Etc/UTC 11:53:08 postgres | creating configuration files ... ok 11:53:08 postgres | running bootstrap script ... ok 11:53:08 postgres | performing post-bootstrap initialization ... ok 11:53:08 postgres | syncing data to disk ... ok 11:53:08 postgres | 11:53:08 postgres | 11:53:08 postgres | Success. You can now start the database server using: 11:53:08 postgres | 11:53:08 postgres | pg_ctl -D /var/lib/postgresql/data -l logfile start 11:53:08 postgres | 11:53:08 postgres | initdb: warning: enabling "trust" authentication for local connections 11:53:08 postgres | initdb: hint: You can change this by editing pg_hba.conf or using the option -A, or --auth-local and --auth-host, the next time you run initdb. 
11:53:08 postgres | waiting for server to start....2025-06-17 11:46:57.981 UTC [48] LOG: starting PostgreSQL 16.4 (Debian 16.4-1.pgdg120+2) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit 11:53:08 postgres | 2025-06-17 11:46:57.985 UTC [48] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432" 11:53:08 postgres | 2025-06-17 11:46:57.993 UTC [51] LOG: database system was shut down at 2025-06-17 11:46:57 UTC 11:53:08 postgres | 2025-06-17 11:46:58.003 UTC [48] LOG: database system is ready to accept connections 11:53:08 postgres | done 11:53:08 postgres | server started 11:53:08 postgres | 11:53:08 postgres | /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/db-pg.conf 11:53:08 postgres | 11:53:08 postgres | /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/db-pg.sh 11:53:08 postgres | #!/bin/bash -xv 11:53:08 postgres | # Copyright (C) 2022, 2024 Nordix Foundation. All rights reserved 11:53:08 postgres | # 11:53:08 postgres | # Licensed under the Apache License, Version 2.0 (the "License"); 11:53:08 postgres | # you may not use this file except in compliance with the License. 11:53:08 postgres | # You may obtain a copy of the License at 11:53:08 postgres | # 11:53:08 postgres | # http://www.apache.org/licenses/LICENSE-2.0 11:53:08 postgres | # 11:53:08 postgres | # Unless required by applicable law or agreed to in writing, software 11:53:08 postgres | # distributed under the License is distributed on an "AS IS" BASIS, 11:53:08 postgres | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 11:53:08 postgres | # See the License for the specific language governing permissions and 11:53:08 postgres | # limitations under the License. 11:53:08 postgres | 11:53:08 postgres | psql -U postgres -d postgres --command "CREATE USER ${PGSQL_USER} WITH PASSWORD '${PGSQL_PASSWORD}';" 11:53:08 postgres | + psql -U postgres -d postgres --command 'CREATE USER policy_user WITH PASSWORD '\''policy_user'\'';' 11:53:08 postgres | CREATE ROLE 11:53:08 postgres | 11:53:08 postgres | for db in migration pooling policyadmin policyclamp operationshistory clampacm 11:53:08 postgres | do 11:53:08 postgres | psql -U postgres -d postgres --command "CREATE DATABASE ${db};" 11:53:08 postgres | psql -U postgres -d postgres --command "ALTER DATABASE ${db} OWNER TO ${PGSQL_USER} ;" 11:53:08 postgres | psql -U postgres -d postgres --command "GRANT ALL PRIVILEGES ON DATABASE ${db} TO ${PGSQL_USER} ;" 11:53:08 postgres | done 11:53:08 postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm 11:53:08 postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE migration;' 11:53:08 postgres | CREATE DATABASE 11:53:08 postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE migration OWNER TO policy_user ;' 11:53:08 postgres | ALTER DATABASE 11:53:08 postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE migration TO policy_user ;' 11:53:08 postgres | GRANT 11:53:08 postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm 11:53:08 postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE pooling;' 11:53:08 postgres | CREATE DATABASE 11:53:08 postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE pooling OWNER TO policy_user ;' 11:53:08 postgres | ALTER DATABASE 11:53:08 postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE pooling TO policy_user ;' 
11:53:08 postgres | GRANT 11:53:08 postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm 11:53:08 postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE policyadmin;' 11:53:08 postgres | CREATE DATABASE 11:53:08 postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE policyadmin OWNER TO policy_user ;' 11:53:08 postgres | ALTER DATABASE 11:53:08 postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE policyadmin TO policy_user ;' 11:53:08 postgres | GRANT 11:53:08 postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm 11:53:08 postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE policyclamp;' 11:53:08 postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE policyclamp OWNER TO policy_user ;' 11:53:08 postgres | CREATE DATABASE 11:53:08 postgres | ALTER DATABASE 11:53:08 postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE policyclamp TO policy_user ;' 11:53:08 postgres | GRANT 11:53:08 postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm 11:53:08 postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE operationshistory;' 11:53:08 postgres | CREATE DATABASE 11:53:08 postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE operationshistory OWNER TO policy_user ;' 11:53:08 postgres | ALTER DATABASE 11:53:08 postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE operationshistory TO policy_user ;' 11:53:08 postgres | GRANT 11:53:08 postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm 11:53:08 postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE clampacm;' 11:53:08 postgres | CREATE DATABASE 11:53:08 postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE clampacm OWNER TO policy_user ;' 11:53:08 postgres | ALTER DATABASE 11:53:08 postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE clampacm TO policy_user ;' 11:53:08 postgres | GRANT 11:53:08 postgres | 11:53:08 postgres | waiting for server to shut down...2025-06-17 11:46:59.302 UTC [48] LOG: received fast shutdown request 11:53:08 postgres | .2025-06-17 11:46:59.304 UTC [48] LOG: aborting any active transactions 11:53:08 postgres | 2025-06-17 11:46:59.305 UTC [48] LOG: background worker "logical replication launcher" (PID 54) exited with exit code 1 11:53:08 postgres | 2025-06-17 11:46:59.306 UTC [49] LOG: shutting down 11:53:08 postgres | 2025-06-17 11:46:59.308 UTC [49] LOG: checkpoint starting: shutdown immediate 11:53:08 postgres | 2025-06-17 11:46:59.897 UTC [49] LOG: checkpoint complete: wrote 5511 buffers (33.6%); 0 WAL file(s) added, 0 removed, 1 recycled; write=0.428 s, sync=0.153 s, total=0.591 s; sync files=1788, longest=0.012 s, average=0.001 s; distance=25535 kB, estimate=25535 kB; lsn=0/2DDA218, redo lsn=0/2DDA218 11:53:08 postgres | 2025-06-17 11:46:59.908 UTC [48] LOG: database system is shut down 11:53:08 postgres | done 11:53:08 postgres | server stopped 11:53:08 postgres | 11:53:08 postgres | PostgreSQL init process complete; ready for start up. 
11:53:08 postgres | 11:53:08 postgres | 2025-06-17 11:47:00.027 UTC [1] LOG: starting PostgreSQL 16.4 (Debian 16.4-1.pgdg120+2) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit 11:53:08 postgres | 2025-06-17 11:47:00.028 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432 11:53:08 postgres | 2025-06-17 11:47:00.028 UTC [1] LOG: listening on IPv6 address "::", port 5432 11:53:08 postgres | 2025-06-17 11:47:00.030 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432" 11:53:08 postgres | 2025-06-17 11:47:00.040 UTC [101] LOG: database system was shut down at 2025-06-17 11:46:59 UTC 11:53:08 postgres | 2025-06-17 11:47:00.117 UTC [1] LOG: database system is ready to accept connections 11:53:08 postgres | 2025-06-17 11:52:00.103 UTC [99] LOG: checkpoint starting: time 11:53:08 postgres | 2025-06-17 11:53:05.045 UTC [99] LOG: checkpoint complete: wrote 650 buffers (4.0%); 0 WAL file(s) added, 0 removed, 1 recycled; write=64.915 s, sync=0.018 s, total=64.943 s; sync files=515, longest=0.002 s, average=0.001 s; distance=3534 kB, estimate=3534 kB; lsn=0/31502E0, redo lsn=0/314DDE0 11:53:09 prometheus | time=2025-06-17T11:46:55.740Z level=INFO source=main.go:674 msg="No time or size retention was set so using the default time retention" duration=15d 11:53:09 prometheus | time=2025-06-17T11:46:55.740Z level=INFO source=main.go:725 msg="Starting Prometheus Server" mode=server version="(version=3.4.1, branch=HEAD, revision=aea6503d9bbaad6c5faff3ecf6f1025213356c92)" 11:53:09 prometheus | time=2025-06-17T11:46:55.740Z level=INFO source=main.go:730 msg="operational information" build_context="(go=go1.24.3, platform=linux/amd64, user=root@16f976c24db1, date=20250531-10:44:38, tags=netgo,builtinassets,stringlabels)" host_details="(Linux 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 prometheus (none))" fd_limits="(soft=1048576, hard=1048576)" vm_limits="(soft=unlimited, hard=unlimited)" 11:53:09 prometheus | time=2025-06-17T11:46:55.741Z level=INFO source=main.go:806 msg="Leaving GOMAXPROCS=8: CPU quota undefined" component=automaxprocs 11:53:09 prometheus | time=2025-06-17T11:46:55.743Z level=INFO source=web.go:656 msg="Start listening for connections" component=web address=0.0.0.0:9090 11:53:09 prometheus | time=2025-06-17T11:46:55.744Z level=INFO source=main.go:1266 msg="Starting TSDB ..." 11:53:09 prometheus | time=2025-06-17T11:46:55.748Z level=INFO source=tls_config.go:347 msg="Listening on" component=web address=[::]:9090 11:53:09 prometheus | time=2025-06-17T11:46:55.748Z level=INFO source=tls_config.go:350 msg="TLS is disabled." 
component=web http2=false address=[::]:9090 11:53:09 prometheus | time=2025-06-17T11:46:55.751Z level=INFO source=head.go:657 msg="Replaying on-disk memory mappable chunks if any" component=tsdb 11:53:09 prometheus | time=2025-06-17T11:46:55.751Z level=INFO source=head.go:744 msg="On-disk memory mappable chunks replay completed" component=tsdb duration=1.03µs 11:53:09 prometheus | time=2025-06-17T11:46:55.751Z level=INFO source=head.go:752 msg="Replaying WAL, this may take a while" component=tsdb 11:53:09 prometheus | time=2025-06-17T11:46:55.752Z level=INFO source=head.go:825 msg="WAL segment loaded" component=tsdb segment=0 maxSegment=0 duration=504.784µs 11:53:09 prometheus | time=2025-06-17T11:46:55.752Z level=INFO source=head.go:862 msg="WAL replay completed" component=tsdb checkpoint_replay_duration=22.201µs wal_replay_duration=719.665µs wbl_replay_duration=290ns chunk_snapshot_load_duration=0s mmap_chunk_replay_duration=1.03µs total_replay_duration=781.496µs 11:53:09 prometheus | time=2025-06-17T11:46:55.759Z level=INFO source=main.go:1287 msg="filesystem information" fs_type=EXT4_SUPER_MAGIC 11:53:09 prometheus | time=2025-06-17T11:46:55.759Z level=INFO source=main.go:1290 msg="TSDB started" 11:53:09 prometheus | time=2025-06-17T11:46:55.759Z level=INFO source=main.go:1475 msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml 11:53:09 prometheus | time=2025-06-17T11:46:55.761Z level=INFO source=main.go:1514 msg="updated GOGC" old=100 new=75 11:53:09 prometheus | time=2025-06-17T11:46:55.761Z level=INFO source=main.go:1524 msg="Completed loading of configuration file" db_storage=1.69µs remote_storage=2.33µs web_handler=700ns query_engine=1.27µs scrape=312.072µs scrape_sd=339.903µs notify=194.392µs notify_sd=32.35µs rules=2.42µs tracing=7.37µs filename=/etc/prometheus/prometheus.yml totalDuration=1.838756ms 11:53:09 prometheus | time=2025-06-17T11:46:55.761Z level=INFO source=main.go:1251 msg="Server is ready to receive web requests." 11:53:09 prometheus | time=2025-06-17T11:46:55.761Z level=INFO source=manager.go:175 msg="Starting rule manager..." component="rule manager" 11:53:09 zookeeper | ===> User 11:53:09 zookeeper | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) 11:53:09 zookeeper | ===> Configuring ... 11:53:09 zookeeper | ===> Running preflight checks ... 11:53:09 zookeeper | ===> Check if /var/lib/zookeeper/data is writable ... 11:53:09 zookeeper | ===> Check if /var/lib/zookeeper/log is writable ... 11:53:09 zookeeper | ===> Launching ... 11:53:09 zookeeper | ===> Launching zookeeper ... 
11:53:09 zookeeper | [2025-06-17 11:46:57,763] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 11:53:09 zookeeper | [2025-06-17 11:46:57,766] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 11:53:09 zookeeper | [2025-06-17 11:46:57,767] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 11:53:09 zookeeper | [2025-06-17 11:46:57,767] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 11:53:09 zookeeper | [2025-06-17 11:46:57,767] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 11:53:09 zookeeper | [2025-06-17 11:46:57,768] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager) 11:53:09 zookeeper | [2025-06-17 11:46:57,768] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager) 11:53:09 zookeeper | [2025-06-17 11:46:57,768] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager) 11:53:09 zookeeper | [2025-06-17 11:46:57,768] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain) 11:53:09 zookeeper | [2025-06-17 11:46:57,769] INFO Log4j 1.2 jmx support not found; jmx disabled. (org.apache.zookeeper.jmx.ManagedUtil) 11:53:09 zookeeper | [2025-06-17 11:46:57,769] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 11:53:09 zookeeper | [2025-06-17 11:46:57,770] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 11:53:09 zookeeper | [2025-06-17 11:46:57,770] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 11:53:09 zookeeper | [2025-06-17 11:46:57,770] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 11:53:09 zookeeper | [2025-06-17 11:46:57,770] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 11:53:09 zookeeper | [2025-06-17 11:46:57,770] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain) 11:53:09 zookeeper | [2025-06-17 11:46:57,780] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@3bbc39f8 (org.apache.zookeeper.server.ServerMetrics) 11:53:09 zookeeper | [2025-06-17 11:46:57,782] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) 11:53:09 zookeeper | [2025-06-17 11:46:57,782] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) 11:53:09 zookeeper | [2025-06-17 11:46:57,784] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog) 11:53:09 zookeeper | [2025-06-17 11:46:57,791] INFO (org.apache.zookeeper.server.ZooKeeperServer) 11:53:09 zookeeper | [2025-06-17 11:46:57,791] INFO ______ _ (org.apache.zookeeper.server.ZooKeeperServer) 11:53:09 zookeeper | [2025-06-17 11:46:57,791] INFO |___ / | | (org.apache.zookeeper.server.ZooKeeperServer) 11:53:09 zookeeper | [2025-06-17 11:46:57,791] INFO / / ___ ___ | | __ ___ ___ _ __ ___ _ __ (org.apache.zookeeper.server.ZooKeeperServer) 11:53:09 zookeeper | [2025-06-17 11:46:57,791] INFO / 
/ / _ \ / _ \ | |/ / / _ \ / _ \ | '_ \ / _ \ | '__| (org.apache.zookeeper.server.ZooKeeperServer) 11:53:09 zookeeper | [2025-06-17 11:46:57,791] INFO / /__ | (_) | | (_) | | < | __/ | __/ | |_) | | __/ | | (org.apache.zookeeper.server.ZooKeeperServer) 11:53:09 zookeeper | [2025-06-17 11:46:57,791] INFO /_____| \___/ \___/ |_|\_\ \___| \___| | .__/ \___| |_| (org.apache.zookeeper.server.ZooKeeperServer) 11:53:09 zookeeper | [2025-06-17 11:46:57,791] INFO | | (org.apache.zookeeper.server.ZooKeeperServer) 11:53:09 zookeeper | [2025-06-17 11:46:57,791] INFO |_| (org.apache.zookeeper.server.ZooKeeperServer) 11:53:09 zookeeper | [2025-06-17 11:46:57,791] INFO (org.apache.zookeeper.server.ZooKeeperServer) 11:53:09 zookeeper | [2025-06-17 11:46:57,792] INFO Server environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.server.ZooKeeperServer) 11:53:09 zookeeper | [2025-06-17 11:46:57,792] INFO Server environment:host.name=zookeeper (org.apache.zookeeper.server.ZooKeeperServer) 11:53:09 zookeeper | [2025-06-17 11:46:57,792] INFO Server environment:java.version=17.0.14 (org.apache.zookeeper.server.ZooKeeperServer) 11:53:09 zookeeper | [2025-06-17 11:46:57,792] INFO Server environment:java.vendor=Eclipse Adoptium (org.apache.zookeeper.server.ZooKeeperServer) 11:53:09 zookeeper | [2025-06-17 11:46:57,792] INFO Server environment:java.home=/usr/lib/jvm/temurin-17-jre (org.apache.zookeeper.server.ZooKeeperServer) 11:53:09 zookeeper | [2025-06-17 11:46:57,793] INFO Server environment:java.class.path=/usr/bin/../share/java/kafka/kafka-streams-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-transaction-coordinator-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/connect-transforms-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.118.Final.jar:/usr/bin/../share/java/kafka/kafka-clients-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/connect-mirror-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.16.2.jar:/usr/bin/../share/java/kafka/scala-library-2.13.15.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.118.Final.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.16.2.jar:/usr/bin/../share/java/kafka/connect-runtime-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-afterburner-2.16.2.jar:/usr/bin/../share/java/kafka
/kafka-streams-test-utils-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/protobuf-java-3.25.5.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.16.2.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/maven-artifact-3.9.6.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/trogdor-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-server-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-shell-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.15.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-common-4.1.118.Final.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/commons-lang3-3.12.0.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.118.Final.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.6-4.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.118.Final.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.118.Final.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-tools-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.16.2.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.118.Final.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.16.2.jar:/usr/bin/../share/java/kafka/jackson-databind-2.16.2.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/opentelemetry-proto-1.0.0-alpha.jar:/usr/bin/..
/share/java/kafka/jetty-security-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/connect-json-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.16.2.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/kafka-raft-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/plexus-utils-3.5.1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/connect-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/commons-io-2.14.0.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.16.2.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/jackson-core-2.16.2.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jline-3.25.1.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.118.Final.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.118.Final.jar:/usr/bin/../share/java/kafka/kafka-storage-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.57.v20241219.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.server.ZooKeeperServer) 11:53:09 zookeeper | [2025-06-17 11:46:57,793] INFO Server environment:java.library.path=/usr/local/lib64:/usr/local/lib::/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer) 11:53:09 zookeeper | [2025-06-17 11:46:57,793] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer) 11:53:09 zookeeper | [2025-06-17 11:46:57,793] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer) 11:53:09 zookeeper | [2025-06-17 11:46:57,793] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer) 11:53:09 zookeeper | [2025-06-17 11:46:57,793] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer) 11:53:09 zookeeper | [2025-06-17 11:46:57,793] INFO Server environment:os.version=4.15.0-192-generic (org.apache.zookeeper.server.ZooKeeperServer) 11:53:09 zookeeper | [2025-06-17 11:46:57,793] INFO Server environment:user.name=appuser (org.apache.zookeeper.server.ZooKeeperServer) 11:53:09 zookeeper | [2025-06-17 11:46:57,793] INFO Server environment:user.home=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) 11:53:09 zookeeper | [2025-06-17 11:46:57,793] INFO Server environment:user.dir=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) 11:53:09 zookeeper | [2025-06-17 11:46:57,793] INFO Server environment:os.memory.free=495MB (org.apache.zookeeper.server.ZooKeeperServer) 11:53:09 zookeeper | [2025-06-17 11:46:57,793] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer) 11:53:09 zookeeper | [2025-06-17 11:46:57,793] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer) 11:53:09 zookeeper | [2025-06-17 11:46:57,793] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer) 11:53:09 zookeeper | [2025-06-17 11:46:57,793] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) 11:53:09 
zookeeper | [2025-06-17 11:46:57,793] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) 11:53:09 zookeeper | [2025-06-17 11:46:57,793] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) 11:53:09 zookeeper | [2025-06-17 11:46:57,793] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) 11:53:09 zookeeper | [2025-06-17 11:46:57,793] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer) 11:53:09 zookeeper | [2025-06-17 11:46:57,794] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer) 11:53:09 zookeeper | [2025-06-17 11:46:57,794] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle) 11:53:09 zookeeper | [2025-06-17 11:46:57,795] INFO minSessionTimeout set to 6000 ms (org.apache.zookeeper.server.ZooKeeperServer) 11:53:09 zookeeper | [2025-06-17 11:46:57,795] INFO maxSessionTimeout set to 60000 ms (org.apache.zookeeper.server.ZooKeeperServer) 11:53:09 zookeeper | [2025-06-17 11:46:57,797] INFO getData response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache) 11:53:09 zookeeper | [2025-06-17 11:46:57,797] INFO getChildren response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache) 11:53:09 zookeeper | [2025-06-17 11:46:57,798] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 11:53:09 zookeeper | [2025-06-17 11:46:57,798] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 11:53:09 zookeeper | [2025-06-17 11:46:57,798] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 11:53:09 zookeeper | [2025-06-17 11:46:57,798] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 11:53:09 zookeeper | [2025-06-17 11:46:57,798] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 11:53:09 zookeeper | [2025-06-17 11:46:57,798] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 11:53:09 zookeeper | [2025-06-17 11:46:57,800] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer) 11:53:09 zookeeper | [2025-06-17 11:46:57,800] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer) 11:53:09 zookeeper | [2025-06-17 11:46:57,800] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper) 11:53:09 zookeeper | [2025-06-17 11:46:57,800] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper) 11:53:09 zookeeper | [2025-06-17 11:46:57,800] INFO Created server with tickTime 3000 ms minSessionTimeout 6000 ms maxSessionTimeout 60000 ms clientPortListenBacklog -1 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer) 11:53:09 zookeeper | [2025-06-17 11:46:57,824] INFO Logging initialized @429ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log) 11:53:09 zookeeper | [2025-06-17 11:46:57,880] WARN o.e.j.s.ServletContextHandler@6150c3ec{/,null,STOPPED} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler) 11:53:09 zookeeper | [2025-06-17 11:46:57,881] WARN Empty contextPath 
(org.eclipse.jetty.server.handler.ContextHandler) 11:53:09 zookeeper | [2025-06-17 11:46:57,896] INFO jetty-9.4.57.v20241219; built: 2025-01-08T21:24:30.412Z; git: df524e6b29271c2e09ba9aea83c18dc9db464a31; jvm 17.0.14+7 (org.eclipse.jetty.server.Server) 11:53:09 zookeeper | [2025-06-17 11:46:57,931] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session) 11:53:09 zookeeper | [2025-06-17 11:46:57,931] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session) 11:53:09 zookeeper | [2025-06-17 11:46:57,932] INFO node0 Scavenging every 660000ms (org.eclipse.jetty.server.session) 11:53:09 zookeeper | [2025-06-17 11:46:57,935] WARN ServletContext@o.e.j.s.ServletContextHandler@6150c3ec{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler) 11:53:09 zookeeper | [2025-06-17 11:46:57,943] INFO Started o.e.j.s.ServletContextHandler@6150c3ec{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler) 11:53:09 zookeeper | [2025-06-17 11:46:57,953] INFO Started ServerConnector@222545dc{HTTP/1.1, (http/1.1)}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector) 11:53:09 zookeeper | [2025-06-17 11:46:57,953] INFO Started @562ms (org.eclipse.jetty.server.Server) 11:53:09 zookeeper | [2025-06-17 11:46:57,953] INFO Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer) 11:53:09 zookeeper | [2025-06-17 11:46:57,958] INFO Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory) 11:53:09 zookeeper | [2025-06-17 11:46:57,959] WARN maxCnxns is not configured, using default value 0. (org.apache.zookeeper.server.ServerCnxnFactory) 11:53:09 zookeeper | [2025-06-17 11:46:57,961] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. 
(org.apache.zookeeper.server.NIOServerCnxnFactory) 11:53:09 zookeeper | [2025-06-17 11:46:57,962] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory) 11:53:09 zookeeper | [2025-06-17 11:46:57,973] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) 11:53:09 zookeeper | [2025-06-17 11:46:57,973] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) 11:53:09 zookeeper | [2025-06-17 11:46:57,974] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase) 11:53:09 zookeeper | [2025-06-17 11:46:57,974] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase) 11:53:09 zookeeper | [2025-06-17 11:46:57,978] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream) 11:53:09 zookeeper | [2025-06-17 11:46:57,978] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) 11:53:09 zookeeper | [2025-06-17 11:46:57,981] INFO Snapshot loaded in 7 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase) 11:53:09 zookeeper | [2025-06-17 11:46:57,981] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) 11:53:09 zookeeper | [2025-06-17 11:46:57,983] INFO Snapshot taken in 2 ms (org.apache.zookeeper.server.ZooKeeperServer) 11:53:09 zookeeper | [2025-06-17 11:46:57,996] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor) 11:53:09 zookeeper | [2025-06-17 11:46:57,997] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms (org.apache.zookeeper.server.RequestThrottler) 11:53:09 zookeeper | [2025-06-17 11:46:58,014] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager) 11:53:09 zookeeper | [2025-06-17 11:46:58,015] INFO ZooKeeper audit is disabled. (org.apache.zookeeper.audit.ZKAuditProvider) 11:53:09 zookeeper | [2025-06-17 11:46:59,139] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog) 11:53:09 Tearing down containers... 
11:53:09 Container policy-csit Stopping 11:53:09 Container grafana Stopping 11:53:09 Container policy-opa-pdp Stopping 11:53:09 Container policy-csit Stopped 11:53:09 Container policy-csit Removing 11:53:09 Container policy-csit Removed 11:53:09 Container grafana Stopped 11:53:09 Container grafana Removing 11:53:09 Container grafana Removed 11:53:09 Container prometheus Stopping 11:53:10 Container prometheus Stopped 11:53:10 Container prometheus Removing 11:53:10 Container prometheus Removed 11:53:19 Container policy-opa-pdp Stopped 11:53:19 Container policy-opa-pdp Removing 11:53:19 Container policy-opa-pdp Removed 11:53:19 Container policy-pap Stopping 11:53:30 Container policy-pap Stopped 11:53:30 Container policy-pap Removing 11:53:30 Container policy-pap Removed 11:53:30 Container kafka Stopping 11:53:30 Container policy-api Stopping 11:53:31 Container kafka Stopped 11:53:31 Container kafka Removing 11:53:31 Container kafka Removed 11:53:31 Container zookeeper Stopping 11:53:31 Container zookeeper Stopped 11:53:31 Container zookeeper Removing 11:53:31 Container zookeeper Removed 11:53:40 Container policy-api Stopped 11:53:40 Container policy-api Removing 11:53:40 Container policy-api Removed 11:53:40 Container policy-db-migrator Stopping 11:53:40 Container policy-db-migrator Stopped 11:53:40 Container policy-db-migrator Removing 11:53:40 Container policy-db-migrator Removed 11:53:40 Container postgres Stopping 11:53:41 Container postgres Stopped 11:53:41 Container postgres Removing 11:53:41 Container postgres Removed 11:53:41 Network compose_default Removing 11:53:41 Network compose_default Removed 11:53:41 $ ssh-agent -k 11:53:41 unset SSH_AUTH_SOCK; 11:53:41 unset SSH_AGENT_PID; 11:53:41 echo Agent pid 2043 killed; 11:53:41 [ssh-agent] Stopped. 11:53:41 Robot results publisher started... 11:53:41 INFO: Checking test criticality is deprecated and will be dropped in a future release! 11:53:41 -Parsing output xml: 11:53:41 Done! 11:53:41 -Copying log files to build dir: 11:53:41 Done! 11:53:41 -Assigning results to build: 11:53:41 Done! 11:53:41 -Checking thresholds: 11:53:41 Done! 11:53:41 Done publishing Robot results. 11:53:41 [PostBuildScript] - [INFO] Executing post build scripts. 
11:53:41 [policy-opa-pdp-master-project-csit-policy-opa-pdp] $ /bin/bash /tmp/jenkins9058908966299366874.sh
11:53:41 ---> sysstat.sh
11:53:42 [policy-opa-pdp-master-project-csit-policy-opa-pdp] $ /bin/bash /tmp/jenkins6882272282641853316.sh
11:53:42 ---> package-listing.sh
11:53:42 ++ facter osfamily
11:53:42 ++ tr '[:upper:]' '[:lower:]'
11:53:42 + OS_FAMILY=debian
11:53:42 + workspace=/w/workspace/policy-opa-pdp-master-project-csit-policy-opa-pdp
11:53:42 + START_PACKAGES=/tmp/packages_start.txt
11:53:42 + END_PACKAGES=/tmp/packages_end.txt
11:53:42 + DIFF_PACKAGES=/tmp/packages_diff.txt
11:53:42 + PACKAGES=/tmp/packages_start.txt
11:53:42 + '[' /w/workspace/policy-opa-pdp-master-project-csit-policy-opa-pdp ']'
11:53:42 + PACKAGES=/tmp/packages_end.txt
11:53:42 + case "${OS_FAMILY}" in
11:53:42 + dpkg -l
11:53:42 + grep '^ii'
11:53:42 + '[' -f /tmp/packages_start.txt ']'
11:53:42 + '[' -f /tmp/packages_end.txt ']'
11:53:42 + diff /tmp/packages_start.txt /tmp/packages_end.txt
11:53:42 + '[' /w/workspace/policy-opa-pdp-master-project-csit-policy-opa-pdp ']'
11:53:42 + mkdir -p /w/workspace/policy-opa-pdp-master-project-csit-policy-opa-pdp/archives/
11:53:42 + cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/policy-opa-pdp-master-project-csit-policy-opa-pdp/archives/
11:53:42 [policy-opa-pdp-master-project-csit-policy-opa-pdp] $ /bin/bash /tmp/jenkins13583284706803250616.sh
11:53:42 ---> capture-instance-metadata.sh
11:53:42 Setup pyenv:
11:53:42 system
11:53:42 3.8.13
11:53:42 3.9.13
11:53:42 * 3.10.6 (set by /w/workspace/policy-opa-pdp-master-project-csit-policy-opa-pdp/.python-version)
11:53:42 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-dI98 from file:/tmp/.os_lf_venv
11:53:44 lf-activate-venv(): INFO: Installing: lftools
11:53:53 lf-activate-venv(): INFO: Adding /tmp/venv-dI98/bin to PATH
11:53:53 INFO: Running in OpenStack, capturing instance metadata
11:53:54 [policy-opa-pdp-master-project-csit-policy-opa-pdp] $ /bin/bash /tmp/jenkins11963103445351775571.sh
11:53:54 provisioning config files...
11:53:54 copy managed file [jenkins-log-archives-settings] to file:/w/workspace/policy-opa-pdp-master-project-csit-policy-opa-pdp@tmp/config4408332842108436988tmp
11:53:54 Regular expression run condition: Expression=[^.*logs-s3.*], Label=[]
11:53:54 Run condition [Regular expression match] preventing perform for step [Provide Configuration files]
11:53:54 [EnvInject] - Injecting environment variables from a build step.
11:53:54 [EnvInject] - Injecting as environment variables the properties content
11:53:54 SERVER_ID=logs
11:53:54 
11:53:54 [EnvInject] - Variables injected successfully.
11:53:54 [policy-opa-pdp-master-project-csit-policy-opa-pdp] $ /bin/bash /tmp/jenkins7808699052478701735.sh
11:53:54 ---> create-netrc.sh
11:53:54 [policy-opa-pdp-master-project-csit-policy-opa-pdp] $ /bin/bash /tmp/jenkins3779333959383352689.sh
11:53:54 ---> python-tools-install.sh
11:53:54 Setup pyenv:
11:53:54 system
11:53:54 3.8.13
11:53:54 3.9.13
11:53:54 * 3.10.6 (set by /w/workspace/policy-opa-pdp-master-project-csit-policy-opa-pdp/.python-version)
11:53:54 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-dI98 from file:/tmp/.os_lf_venv
11:53:56 lf-activate-venv(): INFO: Installing: lftools
11:54:04 lf-activate-venv(): INFO: Adding /tmp/venv-dI98/bin to PATH
11:54:04 [policy-opa-pdp-master-project-csit-policy-opa-pdp] $ /bin/bash /tmp/jenkins15910383048911276784.sh
11:54:04 ---> sudo-logs.sh
11:54:04 Archiving 'sudo' log..
11:54:04 [policy-opa-pdp-master-project-csit-policy-opa-pdp] $ /bin/bash /tmp/jenkins17406865056293485880.sh
11:54:04 ---> job-cost.sh
11:54:04 Setup pyenv:
11:54:04 system
11:54:04 3.8.13
11:54:04 3.9.13
11:54:04 * 3.10.6 (set by /w/workspace/policy-opa-pdp-master-project-csit-policy-opa-pdp/.python-version)
11:54:04 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-dI98 from file:/tmp/.os_lf_venv
11:54:06 lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15
11:54:11 lf-activate-venv(): INFO: Adding /tmp/venv-dI98/bin to PATH
11:54:11 INFO: No Stack...
11:54:12 INFO: Retrieving Pricing Info for: v3-standard-8
11:54:12 INFO: Archiving Costs
11:54:12 [policy-opa-pdp-master-project-csit-policy-opa-pdp] $ /bin/bash -l /tmp/jenkins9097801808069504476.sh
11:54:12 ---> logs-deploy.sh
11:54:12 Setup pyenv:
11:54:12 system
11:54:12 3.8.13
11:54:12 3.9.13
11:54:12 * 3.10.6 (set by /w/workspace/policy-opa-pdp-master-project-csit-policy-opa-pdp/.python-version)
11:54:12 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-dI98 from file:/tmp/.os_lf_venv
11:54:14 lf-activate-venv(): INFO: Installing: lftools
11:54:22 lf-activate-venv(): INFO: Adding /tmp/venv-dI98/bin to PATH
11:54:22 INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/policy-opa-pdp-master-project-csit-policy-opa-pdp/180
11:54:22 INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt
11:54:23 Archives upload complete.
11:54:23 INFO: archiving logs to Nexus
11:54:24 ---> uname -a:
11:54:24 Linux prd-ubuntu1804-docker-8c-8g-21811 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
11:54:24 
11:54:24 
11:54:24 ---> lscpu:
11:54:24 Architecture: x86_64
11:54:24 CPU op-mode(s): 32-bit, 64-bit
11:54:24 Byte Order: Little Endian
11:54:24 CPU(s): 8
11:54:24 On-line CPU(s) list: 0-7
11:54:24 Thread(s) per core: 1
11:54:24 Core(s) per socket: 1
11:54:24 Socket(s): 8
11:54:24 NUMA node(s): 1
11:54:24 Vendor ID: AuthenticAMD
11:54:24 CPU family: 23
11:54:24 Model: 49
11:54:24 Model name: AMD EPYC-Rome Processor
11:54:24 Stepping: 0
11:54:24 CPU MHz: 2799.998
11:54:24 BogoMIPS: 5599.99
11:54:24 Virtualization: AMD-V
11:54:24 Hypervisor vendor: KVM
11:54:24 Virtualization type: full
11:54:24 L1d cache: 32K
11:54:24 L1i cache: 32K
11:54:24 L2 cache: 512K
11:54:24 L3 cache: 16384K
11:54:24 NUMA node0 CPU(s): 0-7
11:54:24 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities
11:54:24 
11:54:24 
11:54:24 ---> nproc:
11:54:24 8
11:54:24 
11:54:24 
11:54:24 ---> df -h:
11:54:24 Filesystem Size Used Avail Use% Mounted on
11:54:24 udev 16G 0 16G 0% /dev
11:54:24 tmpfs 3.2G 708K 3.2G 1% /run
11:54:24 /dev/vda1 155G 15G 141G 10% /
11:54:24 tmpfs 16G 0 16G 0% /dev/shm
11:54:24 tmpfs 5.0M 0 5.0M 0% /run/lock
11:54:24 tmpfs 16G 0 16G 0% /sys/fs/cgroup
11:54:24 /dev/vda15 105M 4.4M 100M 5% /boot/efi
11:54:24 tmpfs 3.2G 0 3.2G 0% /run/user/1001
11:54:24 
11:54:24 
11:54:24 ---> free -m:
11:54:24 total used free shared buff/cache available
11:54:24 Mem: 32167 874 24057 0 7235 30837
11:54:24 Swap: 1023 0 1023
11:54:24 
11:54:24 
11:54:24 ---> ip addr:
11:54:24 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
11:54:24 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
11:54:24 inet 127.0.0.1/8 scope host lo
11:54:24 valid_lft forever preferred_lft forever
11:54:24 inet6 ::1/128 scope host
11:54:24 valid_lft forever preferred_lft forever
11:54:24 2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000
11:54:24 link/ether fa:16:3e:7e:25:08 brd ff:ff:ff:ff:ff:ff
11:54:24 inet 10.30.107.65/23 brd 10.30.107.255 scope global dynamic ens3
11:54:24 valid_lft 85813sec preferred_lft 85813sec
11:54:24 inet6 fe80::f816:3eff:fe7e:2508/64 scope link
11:54:24 valid_lft forever preferred_lft forever
11:54:24 3: docker0: mtu 1500 qdisc noqueue state DOWN group default
11:54:24 link/ether 02:42:b7:ef:e2:f0 brd ff:ff:ff:ff:ff:ff
11:54:24 inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0
11:54:24 valid_lft forever preferred_lft forever
11:54:24 inet6 fe80::42:b7ff:feef:e2f0/64 scope link
11:54:24 valid_lft forever preferred_lft forever
11:54:24 
11:54:24 
11:54:24 ---> sar -b -r -n DEV:
11:54:24 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-21811) 06/17/25 _x86_64_ (8 CPU)
11:54:24 
11:54:24 11:44:40 LINUX RESTART (8 CPU)
11:54:24 
11:54:24 11:45:02 tps rtps wtps bread/s bwrtn/s
11:54:24 11:46:01 355.01 59.02 295.98 3729.00 109762.68
11:54:24 11:47:01 729.56 23.03 706.53 2683.77 254324.69
11:54:24 11:48:01 96.65 0.05 96.60 3.73 7336.64
11:54:24 11:49:01 3.93 0.00 3.93 0.00 101.58
11:54:24 11:50:01 6.67 0.13 6.53 21.06 154.11
11:54:24 11:51:01 218.46 0.27 218.20 20.00 33863.42
11:54:24 11:52:01 9.10 0.00 9.10 0.00 200.20
11:54:24 11:53:01 13.01 0.00 13.01 0.00 300.22
11:54:24 11:54:01 58.62 1.27 57.36 100.12 1295.92
11:54:24 Average: 165.32 9.22 156.11 723.09 45143.34
11:54:24 
11:54:24 11:45:02 kbmemfree kbavail kbmemused %memused kbbuffers kbcached kbcommit %commit kbactive kbinact kbdirty
11:54:24 11:46:01 30162508 31718880 2776712 8.43 68840 1798500 1374616 4.04 831776 1656096 159984
11:54:24 11:47:01 24639272 31137592 8299948 25.20 156732 6432492 5970960 17.57 1644204 6034252 4856
11:54:24 11:48:01 23438964 30103372 9500256 28.84 163400 6594160 7375580 21.70 2765132 6090240 272
11:54:24 11:49:01 23421956 30077156 9517264 28.89 163564 6585268 7570840 22.28 2790620 6080648 452
11:54:24 11:50:01 23373344 30035988 9565876 29.04 163852 6592372 7620004 22.42 2834276 6082996 5368
11:54:24 11:51:01 22761208 29957368 10178012 30.90 204204 7032028 7913504 23.28 3026924 6443880 2108
11:54:24 11:52:01 22747060 29944328 10192160 30.94 204312 7032648 7956196 23.41 3045200 6438368 344
11:54:24 11:53:01 22752764 29950368 10186456 30.93 204392 7032836 7931720 23.34 3039540 6437860 36
11:54:24 11:54:01 24608164 31546684 8331056 25.29 205804 6769180 1767072 5.20 1507724 6192336 27444
11:54:24 Average: 24211693 30496860 8727527 26.50 170567 6207720 6164499 18.14 2387266 5717408 22318
11:54:24 
11:54:24 11:45:02 IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil
11:54:24 11:46:01 lo 1.83 1.83 0.21 0.21 0.00 0.00 0.00 0.00
11:54:24 11:46:01 ens3 502.25 358.70 1718.27 83.66 0.00 0.00 0.00 0.00
11:54:24 11:46:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:54:24 11:47:01 veth883482a 0.00 0.18 0.00 0.01 0.00 0.00 0.00 0.00
11:54:24 11:47:01 veth3272a6e 2.77 3.90 0.44 0.42 0.00 0.00 0.00 0.00
11:54:24 11:47:01 veth5652749 3.80 2.98 0.42 0.46 0.00 0.00 0.00 0.00
11:54:24 11:47:01 lo 13.33 13.33 1.21 1.21 0.00 0.00 0.00 0.00
11:54:24 11:48:01 veth883482a 45.13 56.39 3.49 315.67 0.00 0.00 0.00 0.03
11:54:24 11:48:01 veth3272a6e 147.63 169.06 27.43 26.32 0.00 0.00 0.00 0.00
11:54:24 11:48:01 lo 1.60 1.60 0.13 0.13 0.00 0.00 0.00 0.00
11:54:24 11:48:01 vethf4f958b 4.83 5.80 0.76 0.84 0.00 0.00 0.00 0.00
11:54:24 11:49:01 veth883482a 0.50 0.33 0.03 0.02 0.00 0.00 0.00 0.00
11:54:24 11:49:01 veth3272a6e 0.00 0.02 0.00 0.00 0.00 0.00 0.00 0.00
11:54:24 11:49:01 lo 1.60 1.60 0.12 0.12 0.00 0.00 0.00 0.00
11:54:24 11:49:01 vethf4f958b 0.17 0.35 0.01 0.02 0.00 0.00 0.00 0.00
11:54:24 11:50:01 veth883482a 0.00 0.03 0.00 0.00 0.00 0.00 0.00 0.00
11:54:24 11:50:01 veth3272a6e 98.38 98.85 24.73 11.20 0.00 0.00 0.00 0.00
11:54:24 11:50:01 lo 1.60 1.60 0.13 0.13 0.00 0.00 0.00 0.00
11:54:24 11:50:01 vethf4f958b 0.35 0.57 0.04 0.06 0.00 0.00 0.00 0.00
11:54:24 11:51:01 veth883482a 0.00 0.07 0.00 0.00 0.00 0.00 0.00 0.00
11:54:24 11:51:01 veth3272a6e 165.57 166.39 40.94 18.17 0.00 0.00 0.00 0.00
11:54:24 11:51:01 lo 2.13 2.13 0.17 0.17 0.00 0.00 0.00 0.00
11:54:24 11:51:01 vethf4f958b 0.17 0.38 0.01 0.03 0.00 0.00 0.00 0.00
11:54:24 11:52:01 veth883482a 0.00 0.02 0.00 0.00 0.00 0.00 0.00 0.00
11:54:24 11:52:01 veth3272a6e 544.74 547.17 132.47 58.95 0.00 0.00 0.00 0.01
11:54:24 11:52:01 lo 1.20 1.20 0.09 0.09 0.00 0.00 0.00 0.00
11:54:24 11:52:01 vethf4f958b 0.17 0.37 0.01 0.03 0.00 0.00 0.00 0.00
11:54:24 11:53:01 veth883482a 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:54:24 11:53:01 veth3272a6e 139.06 139.66 33.61 14.96 0.00 0.00 0.00 0.00
11:54:24 11:53:01 lo 1.40 1.40 0.11 0.11 0.00 0.00 0.00 0.00
11:54:24 11:53:01 vethf4f958b 0.17 0.33 0.01 0.02 0.00 0.00 0.00 0.00
11:54:24 11:54:01 lo 2.67 2.67 0.25 0.25 0.00 0.00 0.00 0.00
11:54:24 11:54:01 ens3 2111.71 1375.39 37422.94 203.40 0.00 0.00 0.00 0.00
11:54:24 11:54:01 docker0 142.98 197.58 9.07 1349.11 0.00 0.00 0.00 0.00
11:54:24 Average: lo 3.04 3.04 0.27 0.27 0.00 0.00 0.00 0.00
11:54:24 Average: ens3 232.74 151.60 4157.67 22.49 0.00 0.00 0.00 0.00
11:54:24 Average: docker0 15.92 21.99 1.01 150.17 0.00 0.00 0.00 0.00
11:54:24 
11:54:24 
11:54:24 ---> sar -P ALL:
11:54:24 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-21811) 06/17/25 _x86_64_ (8 CPU)
11:54:24 
11:54:24 11:44:40 LINUX RESTART (8 CPU)
11:54:24 
11:54:24 11:45:02 CPU %user %nice %system %iowait %steal %idle
11:54:24 11:46:01 all 10.88 0.00 1.10 3.68 0.03 84.31
11:54:24 11:46:01 0 5.11 0.00 0.88 4.09 0.03 89.88
11:54:24 11:46:01 1 29.40 0.00 2.33 1.44 0.05 66.77
11:54:24 11:46:01 2 5.45 0.00 0.67 0.64 0.02 93.22
11:54:24 11:46:01 3 17.29 0.00 1.43 1.51 0.05 79.72
11:54:24 11:46:01 4 3.71 0.00 0.73 7.62 0.03 87.91
11:54:24 11:46:01 5 8.74 0.00 0.70 0.34 0.02 90.21
11:54:24 11:46:01 6 4.37 0.00 1.08 13.07 0.03 81.45
11:54:24 11:46:01 7 13.01 0.00 1.00 0.78 0.05 85.16
11:54:24 11:47:01 all 19.56 0.00 7.70 9.83 0.07 62.84
11:54:24 11:47:01 0 18.07 0.00 6.84 3.46 0.05 71.58
11:54:24 11:47:01 1 16.93 0.00 7.15 3.62 0.07 72.24
11:54:24 11:47:01 2 19.14 0.00 6.65 4.31 0.07 69.83
11:54:24 11:47:01 3 17.40 0.00 9.76 28.50 0.09 44.26
11:54:24 11:47:01 4 16.10 0.00 8.13 16.60 0.07 59.10
11:54:24 11:47:01 5 32.97 0.00 8.70 7.42 0.08 50.82
11:54:24 11:47:01 6 18.48 0.00 7.26 3.23 0.05 70.97
11:54:24 11:47:01 7 17.42 0.00 7.07 11.71 0.05 63.75
11:54:24 11:48:01 all 21.22 0.00 2.35 0.82 0.07 75.54
11:54:24 11:48:01 0 21.19 0.00 2.17 2.72 0.07 73.86
11:54:24 11:48:01 1 21.59 0.00 2.17 0.38 0.05 75.81
11:54:24 11:48:01 2 15.06 0.00 2.13 0.84 0.07 81.90
11:54:24 11:48:01 3 21.77 0.00 2.41 0.95 0.07 74.80
11:54:24 11:48:01 4 30.39 0.00 2.90 0.25 0.08 66.38
11:54:24 11:48:01 5 26.34 0.00 2.77 0.38 0.07 70.43
11:54:24 11:48:01 6 15.40 0.00 1.83 0.65 0.08 82.03
11:54:24 11:48:01 7 17.93 0.00 2.48 0.37 0.10 79.12
11:54:24 11:49:01 all 0.79 0.00 0.14 0.02 0.04 99.01
11:54:24 11:49:01 0 0.80 0.00 0.22 0.08 0.03 98.87
11:54:24 11:49:01 1 0.63 0.00 0.10 0.00 0.03 99.23
11:54:24 11:49:01 2 0.91 0.00 0.10 0.00 0.05 98.94
11:54:24 11:49:01 3 0.52 0.00 0.12 0.00 0.05 99.32
11:54:24 11:49:01 4 1.40 0.00 0.12 0.02 0.02 98.45
11:54:24 11:49:01 5 0.60 0.00 0.12 0.00 0.02 99.27
11:54:24 11:49:01 6 0.88 0.00 0.13 0.02 0.03 98.93
11:54:24 11:49:01 7 0.60 0.00 0.22 0.03 0.03 99.12
11:54:24 11:50:01 all 1.71 0.00 0.31 0.03 0.04 97.92
11:54:24 11:50:01 0 1.02 0.00 0.32 0.15 0.03 98.48
11:54:24 11:50:01 1 2.34 0.00 0.42 0.02 0.05 97.18
11:54:24 11:50:01 2 0.84 0.00 0.27 0.00 0.03 98.86
11:54:24 11:50:01 3 1.82 0.00 0.43 0.02 0.03 97.70
11:54:24 11:50:01 4 1.70 0.00 0.20 0.02 0.03 98.05
11:54:24 11:50:01 5 1.93 0.00 0.38 0.02 0.03 97.63
11:54:24 11:50:01 6 2.17 0.00 0.23 0.02 0.05 97.53
11:54:24 11:50:01 7 1.88 0.00 0.22 0.00 0.03 97.87
11:54:24 11:51:01 all 9.10 0.00 2.60 1.71 0.06 86.53
11:54:24 11:51:01 0 7.32 0.00 2.73 1.22 0.05 88.68
11:54:24 11:51:01 1 11.55 0.00 3.10 6.02 0.05 79.28
11:54:24 11:51:01 2 6.96 0.00 1.84 0.54 0.07 90.59
11:54:24 11:51:01 3 11.06 0.00 2.24 0.17 0.05 86.48
11:54:24 11:51:01 4 7.05 0.00 2.59 0.35 0.07 89.94
11:54:24 11:51:01 5 8.22 0.00 2.08 3.48 0.07 86.15
11:54:24 11:51:01 6 12.34 0.00 2.36 1.51 0.05 83.73
11:54:24 11:51:01 7 8.27 0.00 3.81 0.42 0.07 87.44
11:54:24 11:52:01 all 3.31 0.00 0.58 0.06 0.04 96.01
11:54:24 11:52:01 0 3.25 0.00 0.45 0.00 0.03 96.26
11:54:24 11:52:01 1 3.37 0.00 0.47 0.00 0.05 96.11
11:54:24 11:52:01 2 3.14 0.00 0.33 0.00 0.03 96.49
11:54:24 11:52:01 3 2.50 0.00 1.05 0.02 0.03 96.39
11:54:24 11:52:01 4 4.31 0.00 0.55 0.00 0.05 95.09
11:54:24 11:52:01 5 3.37 0.00 0.45 0.02 0.03 96.13
11:54:24 11:52:01 6 2.37 0.00 0.55 0.18 0.05 96.85
11:54:24 11:52:01 7 4.19 0.00 0.77 0.23 0.07 94.74
11:54:24 11:53:01 all 1.25 0.00 0.27 0.05 0.04 98.39
11:54:24 11:53:01 0 1.03 0.00 0.27 0.02 0.05 98.63
11:54:24 11:53:01 1 1.69 0.00 0.25 0.00 0.03 98.02
11:54:24 11:53:01 2 0.53 0.00 0.07 0.00 0.03 99.37
11:54:24 11:53:01 3 1.74 0.00 0.27 0.00 0.03 97.96
11:54:24 11:53:01 4 1.60 0.00 0.33 0.05 0.03 97.98
11:54:24 11:53:01 5 0.92 0.00 0.23 0.00 0.03 98.82
11:54:24 11:53:01 6 1.22 0.00 0.37 0.27 0.03 98.12
11:54:24 11:53:01 7 1.25 0.00 0.38 0.07 0.07 98.23
11:54:24 11:54:01 all 4.98 0.00 0.78 0.27 0.03 93.93
11:54:24 11:54:01 0 1.98 0.00 0.68 0.05 0.02 97.27
11:54:24 11:54:01 1 1.20 0.00 0.62 0.13 0.03 98.01
11:54:24 11:54:01 2 1.86 0.00 0.72 0.07 0.03 97.32
11:54:24 11:54:01 3 1.39 0.00 0.80 0.05 0.03 97.73
11:54:24 11:54:01 4 11.46 0.00 0.90 0.13 0.03 87.47
11:54:24 11:54:01 5 1.77 0.00 0.74 0.08 0.02 97.39
11:54:24 11:54:01 6 5.48 0.00 0.88 1.39 0.03 92.22
11:54:24 11:54:01 7 14.66 0.00 0.95 0.27 0.05 84.07
11:54:24 Average: all 8.06 0.00 1.75 1.82 0.05 88.32
11:54:24 Average: 0 6.63 0.00 1.61 1.30 0.04 90.42
11:54:24 Average: 1 9.81 0.00 1.84 1.28 0.05 87.03
11:54:24 Average: 2 5.99 0.00 1.42 0.71 0.04 91.84
11:54:24 Average: 3 8.35 0.00 2.04 3.42 0.05 86.14
11:54:24 Average: 4 8.63 0.00 1.82 2.76 0.05 86.74
11:54:24 Average: 5 9.40 0.00 1.79 1.30 0.04 87.48
11:54:24 Average: 6 6.95 0.00 1.62 2.23 0.05 89.15
11:54:24 Average: 7 8.78 0.00 1.87 1.53 0.06 87.76
11:54:24 
11:54:24 
11:54:24 