14:53:20 Triggered by Gerrit: https://gerrit.onap.org/r/c/policy/docker/+/141302
14:53:20 Running as SYSTEM
14:53:20 [EnvInject] - Loading node environment variables.
14:53:20 Building remotely on prd-ubuntu1804-docker-8c-8g-21631 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp
14:53:20 [ssh-agent] Looking for ssh-agent implementation...
14:53:20 [ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
14:53:20 $ ssh-agent
14:53:20 SSH_AUTH_SOCK=/tmp/ssh-kpMZXDwAtxo2/agent.2033
14:53:20 SSH_AGENT_PID=2035
14:53:20 [ssh-agent] Started.
14:53:20 Running ssh-add (command line suppressed)
14:53:20 Identity added: /w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp@tmp/private_key_342326272616935020.key (/w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp@tmp/private_key_342326272616935020.key)
14:53:20 [ssh-agent] Using credentials onap-jobbuiler (Gerrit user)
14:53:20 The recommended git tool is: NONE
14:53:22 using credential onap-jenkins-ssh
14:53:22 Wiping out workspace first.
14:53:22 Cloning the remote Git repository
14:53:22 Cloning repository git://cloud.onap.org/mirror/policy/docker.git
14:53:22 > git init /w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp # timeout=10
14:53:22 Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
14:53:22 > git --version # timeout=10
14:53:22 > git --version # 'git version 2.17.1'
14:53:22 using GIT_SSH to set credentials Gerrit user
14:53:22 Verifying host key using manually-configured host key entries
14:53:22 > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git +refs/heads/*:refs/remotes/origin/* # timeout=30
14:53:22 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
14:53:22 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
14:53:23 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
14:53:23 Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
14:53:23 using GIT_SSH to set credentials Gerrit user
14:53:23 Verifying host key using manually-configured host key entries
14:53:23 > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git refs/changes/02/141302/1 # timeout=30
14:53:23 > git rev-parse ed38a50541249063daf2cfb00b312fb173adeace^{commit} # timeout=10
14:53:23 JENKINS-19022: warning: possible memory leak due to Git plugin usage; see: https://plugins.jenkins.io/git/#remove-git-plugin-buildsbybranch-builddata-script
14:53:23 Checking out Revision ed38a50541249063daf2cfb00b312fb173adeace (refs/changes/02/141302/1)
14:53:23 > git config core.sparsecheckout # timeout=10
14:53:23 > git checkout -f ed38a50541249063daf2cfb00b312fb173adeace # timeout=30
14:53:26 Commit message: "Remove python from the java app docker images"
14:53:26 > git rev-parse FETCH_HEAD^{commit} # timeout=10
14:53:26 > git rev-list --no-walk 473f78ecac5fb75e5968b31a5bab95eaba72c803 # timeout=10
14:53:26 provisioning config files...
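[Note: the checkout above can be reproduced by hand. A minimal sketch, using the refspec and revision taken from the log; it assumes anonymous access to the ONAP git mirror:

  git clone git://cloud.onap.org/mirror/policy/docker.git && cd docker
  git fetch origin refs/changes/02/141302/1      # patchset 1 of Gerrit change 141302
  git checkout -f FETCH_HEAD                     # i.e. ed38a50541249063daf2cfb00b312fb173adeace
]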
14:53:26 copy managed file [npmrc] to file:/home/jenkins/.npmrc
14:53:26 copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf
14:53:26 [policy-opa-pdp-master-project-csit-verify-opa-pdp] $ /bin/bash /tmp/jenkins15322285197456450084.sh
14:53:26 ---> python-tools-install.sh
14:53:26 Setup pyenv:
14:53:26 * system (set by /opt/pyenv/version)
14:53:26 * 3.8.13 (set by /opt/pyenv/version)
14:53:26 * 3.9.13 (set by /opt/pyenv/version)
14:53:26 * 3.10.6 (set by /opt/pyenv/version)
14:53:31 lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-YgXf
14:53:31 lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv
14:53:35 lf-activate-venv(): INFO: Installing: lftools
14:53:58 lf-activate-venv(): INFO: Adding /tmp/venv-YgXf/bin to PATH
14:53:58 Generating Requirements File
14:54:19 Python 3.10.6
14:54:19 pip 25.1.1 from /tmp/venv-YgXf/lib/python3.10/site-packages/pip (python 3.10)
14:54:19 appdirs==1.4.4
14:54:19 argcomplete==3.6.2
14:54:19 aspy.yaml==1.3.0
14:54:19 attrs==25.3.0
14:54:19 autopage==0.5.2
14:54:19 beautifulsoup4==4.13.4
14:54:19 boto3==1.38.36
14:54:19 botocore==1.38.36
14:54:19 bs4==0.0.2
14:54:19 cachetools==5.5.2
14:54:19 certifi==2025.6.15
14:54:19 cffi==1.17.1
14:54:19 cfgv==3.4.0
14:54:19 chardet==5.2.0
14:54:19 charset-normalizer==3.4.2
14:54:19 click==8.2.1
14:54:19 cliff==4.10.0
14:54:19 cmd2==2.6.1
14:54:19 cryptography==3.3.2
14:54:19 debtcollector==3.0.0
14:54:19 decorator==5.2.1
14:54:19 defusedxml==0.7.1
14:54:19 Deprecated==1.2.18
14:54:19 distlib==0.3.9
14:54:19 dnspython==2.7.0
14:54:19 docker==7.1.0
14:54:19 dogpile.cache==1.4.0
14:54:19 durationpy==0.10
14:54:19 email_validator==2.2.0
14:54:19 filelock==3.18.0
14:54:19 future==1.0.0
14:54:19 gitdb==4.0.12
14:54:19 GitPython==3.1.44
14:54:19 google-auth==2.40.3
14:54:19 httplib2==0.22.0
14:54:19 identify==2.6.12
14:54:19 idna==3.10
14:54:19 importlib-resources==1.5.0
14:54:19 iso8601==2.1.0
14:54:19 Jinja2==3.1.6
14:54:19 jmespath==1.0.1
14:54:19 jsonpatch==1.33
14:54:19 jsonpointer==3.0.0
14:54:19 jsonschema==4.24.0
14:54:19 jsonschema-specifications==2025.4.1
14:54:19 keystoneauth1==5.11.1
14:54:19 kubernetes==33.1.0
14:54:19 lftools==0.37.13
14:54:19 lxml==5.4.0
14:54:19 MarkupSafe==3.0.2
14:54:19 msgpack==1.1.1
14:54:19 multi_key_dict==2.0.3
14:54:19 munch==4.0.0
14:54:19 netaddr==1.3.0
14:54:19 niet==1.4.2
14:54:19 nodeenv==1.9.1
14:54:19 oauth2client==4.1.3
14:54:19 oauthlib==3.2.2
14:54:19 openstacksdk==4.6.0
14:54:19 os-client-config==2.1.0
14:54:19 os-service-types==1.7.0
14:54:19 osc-lib==4.0.2
14:54:19 oslo.config==9.8.0
14:54:19 oslo.context==6.0.0
14:54:19 oslo.i18n==6.5.1
14:54:19 oslo.log==7.1.0
14:54:19 oslo.serialization==5.7.0
14:54:19 oslo.utils==9.0.0
14:54:19 packaging==25.0
14:54:19 pbr==6.1.1
14:54:19 platformdirs==4.3.8
14:54:19 prettytable==3.16.0
14:54:19 psutil==7.0.0
14:54:19 pyasn1==0.6.1
14:54:19 pyasn1_modules==0.4.2
14:54:19 pycparser==2.22
14:54:19 pygerrit2==2.0.15
14:54:19 PyGithub==2.6.1
14:54:19 PyJWT==2.10.1
14:54:19 PyNaCl==1.5.0
14:54:19 pyparsing==2.4.7
14:54:19 pyperclip==1.9.0
14:54:19 pyrsistent==0.20.0
14:54:19 python-cinderclient==9.7.0
14:54:19 python-dateutil==2.9.0.post0
14:54:19 python-heatclient==4.2.0
14:54:19 python-jenkins==1.8.2
14:54:19 python-keystoneclient==5.6.0
14:54:19 python-magnumclient==4.8.1
14:54:19 python-openstackclient==8.1.0
14:54:19 python-swiftclient==4.8.0
14:54:19 PyYAML==6.0.2
14:54:19 referencing==0.36.2
14:54:19 requests==2.32.4
14:54:19 requests-oauthlib==2.0.0
14:54:19 requestsexceptions==1.4.0
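[Note: lf-activate-venv() is an LF releng helper; its effect in this step can be approximated with a plain venv. A sketch, with the path and package taken from the log (the helper's exact options are not shown):

  python3 -m venv /tmp/venv-YgXf                 # "Creating python3 venv at /tmp/venv-YgXf"
  . /tmp/venv-YgXf/bin/activate
  pip install lftools                            # "Installing: lftools"
  export PATH=/tmp/venv-YgXf/bin:$PATH           # "Adding /tmp/venv-YgXf/bin to PATH"
  python --version && pip --version && pip freeze   # produces the requirements listing above
]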
14:54:19 rfc3986==2.0.0
14:54:19 rpds-py==0.25.1
14:54:19 rsa==4.9.1
14:54:19 ruamel.yaml==0.18.14
14:54:19 ruamel.yaml.clib==0.2.12
14:54:19 s3transfer==0.13.0
14:54:19 simplejson==3.20.1
14:54:19 six==1.17.0
14:54:19 smmap==5.0.2
14:54:19 soupsieve==2.7
14:54:19 stevedore==5.4.1
14:54:19 tabulate==0.9.0
14:54:19 toml==0.10.2
14:54:19 tomlkit==0.13.3
14:54:19 tqdm==4.67.1
14:54:19 typing_extensions==4.14.0
14:54:19 tzdata==2025.2
14:54:19 urllib3==1.26.20
14:54:19 virtualenv==20.31.2
14:54:19 wcwidth==0.2.13
14:54:19 websocket-client==1.8.0
14:54:19 wrapt==1.17.2
14:54:19 xdg==6.0.0
14:54:19 xmltodict==0.14.2
14:54:19 yq==3.4.3
14:54:19 [EnvInject] - Injecting environment variables from a build step.
14:54:19 [EnvInject] - Injecting as environment variables the properties content
14:54:19 SET_JDK_VERSION=openjdk17
14:54:19 GIT_URL="git://cloud.onap.org/mirror"
14:54:19 [EnvInject] - Variables injected successfully.
14:54:19 [policy-opa-pdp-master-project-csit-verify-opa-pdp] $ /bin/sh /tmp/jenkins1271937717372570665.sh
14:54:19 ---> update-java-alternatives.sh
14:54:19 ---> Updating Java version
14:54:19 ---> Ubuntu/Debian system detected
14:54:19 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode
14:54:19 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode
14:54:20 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode
14:54:20 openjdk version "17.0.4" 2022-07-19
14:54:20 OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04)
14:54:20 OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing)
14:54:20 JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64
14:54:20 [EnvInject] - Injecting environment variables from a build step.
14:54:20 [EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env'
14:54:20 [EnvInject] - Variables injected successfully.
14:54:20 [policy-opa-pdp-master-project-csit-verify-opa-pdp] $ /bin/sh -xe /tmp/jenkins11933914850406441134.sh
14:54:20 + /w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp/csit/run-project-csit.sh opa-pdp
14:54:20 WARNING! Using --password via the CLI is insecure. Use --password-stdin.
14:54:21 WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json.
14:54:21 Configure a credential helper to remove this warning. See
14:54:21 https://docs.docker.com/engine/reference/commandline/login/#credentials-store
14:54:21 Login Succeeded
14:54:21 docker: 'compose' is not a docker command.
14:54:21 See 'docker --help'
14:54:21 Docker Compose Plugin not installed. Installing now...
[curl progress meter elided: the 60.2 MB plugin binary finished downloading at 14:54:24, averaging 24.9 MB/s]
14:54:24 Setting project configuration for: opa-pdp
14:54:24 Configuring docker compose...
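[Note: two of the warnings above have standard remedies. A sketch, not the script's actual code; the plugin release URL is an assumption, since the URL run-project-csit.sh downloads from is not shown in the log:

  # avoid the --password warning by feeding the secret on stdin
  echo "$DOCKER_PASSWORD" | docker login --username "$DOCKER_USERNAME" --password-stdin

  # install the Compose v2 CLI plugin (a single ~60 MB binary) so 'docker compose' works
  mkdir -p ~/.docker/cli-plugins
  curl -SL https://github.com/docker/compose/releases/latest/download/docker-compose-linux-x86_64 \
    -o ~/.docker/cli-plugins/docker-compose
  chmod +x ~/.docker/cli-plugins/docker-compose
  docker compose version          # should now succeed instead of "'compose' is not a docker command"
]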
14:54:26 Starting opa-pdp using postgres + Grafana/Prometheus
14:54:27 api Pulling
14:54:27 postgres Pulling
14:54:27 opa-pdp Pulling
14:54:27 prometheus Pulling
14:54:27 kafka Pulling
14:54:27 zookeeper Pulling
14:54:27 grafana Pulling
14:54:27 policy-db-migrator Pulling
14:54:27 pap Pulling
[per-layer pull progress elided: from 14:54:27 to 14:54:36 the log interleaves "Pulling fs layer", "Waiting", "Downloading", "Verifying Checksum", "Download complete", "Extracting" and "Pull complete" lines for the several dozen image layers of the nine images above]
14:54:32 policy-db-migrator Pulled
14:54:34 opa-pdp Pulled
[log excerpt ends mid-pull at 14:54:36, with the remaining images still downloading and extracting]
Extracting [============================================> ] 55.15MB/62.07MB 14:54:36 1e017ebebdbd Extracting [==================================================>] 37.19MB/37.19MB 14:54:36 55f2b468da67 Downloading [=========================================> ] 214.1MB/257.9MB 14:54:36 2d429b9e73a6 Extracting [=======================> ] 13.57MB/29.13MB 14:54:36 eabd8714fec9 Extracting [===============> ] 118.7MB/375MB 14:54:36 c124ba1a8b26 Extracting [===============================================> ] 87.46MB/91.87MB 14:54:36 c49e0ee60bfb Extracting [===============> ] 33.98MB/107.3MB 14:54:36 e73cb4a42719 Downloading [========================> ] 52.44MB/109.1MB 14:54:36 6ac0e4adf315 Extracting [================================================> ] 60.16MB/62.07MB 14:54:36 c124ba1a8b26 Extracting [==================================================>] 91.87MB/91.87MB 14:54:36 eb7cda286a15 Pull complete 14:54:36 55f2b468da67 Downloading [===========================================> ] 226MB/257.9MB 14:54:36 2d429b9e73a6 Extracting [===========================> ] 16.22MB/29.13MB 14:54:36 eabd8714fec9 Extracting [================> ] 120.9MB/375MB 14:54:36 c49e0ee60bfb Extracting [================> ] 36.21MB/107.3MB 14:54:36 e73cb4a42719 Downloading [=============================> ] 64.34MB/109.1MB 14:54:36 1e017ebebdbd Pull complete 14:54:36 c124ba1a8b26 Pull complete 14:54:36 6394804c2196 Extracting [==================================================>] 1.299kB/1.299kB 14:54:36 6394804c2196 Extracting [==================================================>] 1.299kB/1.299kB 14:54:36 api Pulled 14:54:36 55f2b468da67 Downloading [==============================================> ] 239MB/257.9MB 14:54:36 6ac0e4adf315 Extracting [=================================================> ] 61.83MB/62.07MB 14:54:36 2d429b9e73a6 Extracting [================================> ] 19.17MB/29.13MB 14:54:36 eabd8714fec9 Extracting [================> ] 124.2MB/375MB 14:54:36 6ac0e4adf315 Extracting [==================================================>] 62.07MB/62.07MB 14:54:36 c49e0ee60bfb Extracting [==================> ] 38.99MB/107.3MB 14:54:36 e73cb4a42719 Downloading [=================================> ] 73.53MB/109.1MB 14:54:36 6ac0e4adf315 Pull complete 14:54:36 55f2b468da67 Downloading [=================================================> ] 253.6MB/257.9MB 14:54:36 2d429b9e73a6 Extracting [======================================> ] 22.41MB/29.13MB 14:54:36 55f2b468da67 Verifying Checksum 14:54:36 55f2b468da67 Download complete 14:54:36 eabd8714fec9 Extracting [=================> ] 127.6MB/375MB 14:54:36 e73cb4a42719 Downloading [=======================================> ] 85.43MB/109.1MB 14:54:36 c49e0ee60bfb Extracting [===================> ] 42.34MB/107.3MB 14:54:36 6394804c2196 Pull complete 14:54:36 pap Pulled 14:54:37 f3b09c502777 Extracting [> ] 557.1kB/56.52MB 14:54:37 55f2b468da67 Extracting [> ] 557.1kB/257.9MB 14:54:37 e73cb4a42719 Downloading [===========================================> ] 95.7MB/109.1MB 14:54:37 eabd8714fec9 Extracting [=================> ] 129.8MB/375MB 14:54:37 c49e0ee60bfb Extracting [=====================> ] 45.68MB/107.3MB 14:54:37 f3b09c502777 Extracting [==> ] 2.785MB/56.52MB 14:54:37 e73cb4a42719 Verifying Checksum 14:54:37 e73cb4a42719 Download complete 14:54:37 55f2b468da67 Extracting [==> ] 10.58MB/257.9MB 14:54:37 2d429b9e73a6 Extracting [==========================================> ] 24.77MB/29.13MB 14:54:37 eabd8714fec9 Extracting [=================> ] 132MB/375MB 14:54:37 
c49e0ee60bfb Extracting [======================> ] 49.02MB/107.3MB 14:54:37 f3b09c502777 Extracting [====> ] 5.014MB/56.52MB 14:54:37 55f2b468da67 Extracting [===> ] 17.27MB/257.9MB 14:54:37 2d429b9e73a6 Extracting [==============================================> ] 27.13MB/29.13MB 14:54:37 eabd8714fec9 Extracting [==================> ] 135.4MB/375MB 14:54:37 c49e0ee60bfb Extracting [========================> ] 52.36MB/107.3MB 14:54:37 f3b09c502777 Extracting [=======> ] 8.356MB/56.52MB 14:54:37 55f2b468da67 Extracting [====> ] 21.17MB/257.9MB 14:54:37 eabd8714fec9 Extracting [==================> ] 138.1MB/375MB 14:54:37 c49e0ee60bfb Extracting [=========================> ] 55.15MB/107.3MB 14:54:37 f3b09c502777 Extracting [=========> ] 10.58MB/56.52MB 14:54:37 55f2b468da67 Extracting [====> ] 23.95MB/257.9MB 14:54:37 c49e0ee60bfb Extracting [===========================> ] 57.93MB/107.3MB 14:54:37 eabd8714fec9 Extracting [==================> ] 140.9MB/375MB 14:54:37 2d429b9e73a6 Extracting [================================================> ] 28.31MB/29.13MB 14:54:37 f3b09c502777 Extracting [==========> ] 12.26MB/56.52MB 14:54:37 55f2b468da67 Extracting [====> ] 24.51MB/257.9MB 14:54:37 2d429b9e73a6 Extracting [==================================================>] 29.13MB/29.13MB 14:54:37 c49e0ee60bfb Extracting [============================> ] 61.83MB/107.3MB 14:54:37 eabd8714fec9 Extracting [===================> ] 144.3MB/375MB 14:54:37 f3b09c502777 Extracting [=============> ] 15.04MB/56.52MB 14:54:37 55f2b468da67 Extracting [======> ] 32.87MB/257.9MB 14:54:37 eabd8714fec9 Extracting [===================> ] 145.9MB/375MB 14:54:37 f3b09c502777 Extracting [===============> ] 17.27MB/56.52MB 14:54:37 55f2b468da67 Extracting [=======> ] 36.77MB/257.9MB 14:54:37 c49e0ee60bfb Extracting [==============================> ] 64.62MB/107.3MB 14:54:37 eabd8714fec9 Extracting [===================> ] 148.7MB/375MB 14:54:37 55f2b468da67 Extracting [=========> ] 47.91MB/257.9MB 14:54:37 c49e0ee60bfb Extracting [===============================> ] 67.4MB/107.3MB 14:54:37 f3b09c502777 Extracting [=================> ] 20.05MB/56.52MB 14:54:38 55f2b468da67 Extracting [===========> ] 60.16MB/257.9MB 14:54:38 eabd8714fec9 Extracting [====================> ] 152.6MB/375MB 14:54:38 c49e0ee60bfb Extracting [=================================> ] 71.3MB/107.3MB 14:54:38 f3b09c502777 Extracting [=======================> ] 26.18MB/56.52MB 14:54:38 2d429b9e73a6 Pull complete 14:54:38 55f2b468da67 Extracting [=============> ] 69.07MB/257.9MB 14:54:38 eabd8714fec9 Extracting [====================> ] 156.5MB/375MB 14:54:38 55f2b468da67 Extracting [=============> ] 71.3MB/257.9MB 14:54:38 f3b09c502777 Extracting [===========================> ] 30.64MB/56.52MB 14:54:38 c49e0ee60bfb Extracting [==================================> ] 74.65MB/107.3MB 14:54:38 eabd8714fec9 Extracting [=====================> ] 160.4MB/375MB 14:54:38 55f2b468da67 Extracting [===============> ] 79.1MB/257.9MB 14:54:38 f3b09c502777 Extracting [================================> ] 36.77MB/56.52MB 14:54:38 c49e0ee60bfb Extracting [====================================> ] 77.43MB/107.3MB 14:54:38 eabd8714fec9 Extracting [=====================> ] 164.9MB/375MB 14:54:38 55f2b468da67 Extracting [=================> ] 88.01MB/257.9MB 14:54:38 f3b09c502777 Extracting [============================================> ] 50.14MB/56.52MB 14:54:38 eabd8714fec9 Extracting [======================> ] 169.3MB/375MB 14:54:38 c49e0ee60bfb Extracting 
[=====================================> ] 80.77MB/107.3MB 14:54:38 55f2b468da67 Extracting [===================> ] 98.6MB/257.9MB 14:54:38 eabd8714fec9 Extracting [=======================> ] 177.1MB/375MB 14:54:38 c49e0ee60bfb Extracting [=======================================> ] 84.67MB/107.3MB 14:54:38 f3b09c502777 Extracting [=================================================> ] 56.26MB/56.52MB 14:54:38 f3b09c502777 Extracting [==================================================>] 56.52MB/56.52MB 14:54:38 55f2b468da67 Extracting [====================> ] 105.8MB/257.9MB 14:54:38 eabd8714fec9 Extracting [========================> ] 185.5MB/375MB 14:54:38 c49e0ee60bfb Extracting [=========================================> ] 88.57MB/107.3MB 14:54:38 46eab5b44a35 Extracting [==================================================>] 1.168kB/1.168kB 14:54:38 46eab5b44a35 Extracting [==================================================>] 1.168kB/1.168kB 14:54:38 55f2b468da67 Extracting [=====================> ] 112MB/257.9MB 14:54:38 eabd8714fec9 Extracting [==========================> ] 196.1MB/375MB 14:54:38 c49e0ee60bfb Extracting [============================================> ] 94.7MB/107.3MB 14:54:38 55f2b468da67 Extracting [======================> ] 116.4MB/257.9MB 14:54:38 eabd8714fec9 Extracting [===========================> ] 203.3MB/375MB 14:54:38 c49e0ee60bfb Extracting [=============================================> ] 98.6MB/107.3MB 14:54:38 55f2b468da67 Extracting [=======================> ] 120.3MB/257.9MB 14:54:38 eabd8714fec9 Extracting [============================> ] 211.1MB/375MB 14:54:39 c49e0ee60bfb Extracting [================================================> ] 103.1MB/107.3MB 14:54:39 55f2b468da67 Extracting [========================> ] 124.2MB/257.9MB 14:54:39 eabd8714fec9 Extracting [=============================> ] 217.8MB/375MB 14:54:39 c49e0ee60bfb Extracting [================================================> ] 104.2MB/107.3MB 14:54:39 55f2b468da67 Extracting [========================> ] 128.7MB/257.9MB 14:54:39 eabd8714fec9 Extracting [=============================> ] 222.3MB/375MB 14:54:39 c49e0ee60bfb Extracting [=================================================> ] 105.3MB/107.3MB 14:54:39 f3b09c502777 Pull complete 14:54:39 55f2b468da67 Extracting [=========================> ] 132.6MB/257.9MB 14:54:39 eabd8714fec9 Extracting [==============================> ] 226.7MB/375MB 14:54:39 c49e0ee60bfb Extracting [==================================================>] 107.3MB/107.3MB 14:54:39 eabd8714fec9 Extracting [==============================> ] 231.7MB/375MB 14:54:39 55f2b468da67 Extracting [==========================> ] 138.1MB/257.9MB 14:54:39 46eab5b44a35 Pull complete 14:54:39 408012a7b118 Extracting [==================================================>] 637B/637B 14:54:39 408012a7b118 Extracting [==================================================>] 637B/637B 14:54:39 55f2b468da67 Extracting [===========================> ] 142MB/257.9MB 14:54:39 eabd8714fec9 Extracting [===============================> ] 234MB/375MB 14:54:39 55f2b468da67 Extracting [============================> ] 147.1MB/257.9MB 14:54:39 c4d302cc468d Extracting [> ] 65.54kB/4.534MB 14:54:39 eabd8714fec9 Extracting [===============================> ] 237.9MB/375MB 14:54:39 55f2b468da67 Extracting [=============================> ] 150.4MB/257.9MB 14:54:39 c4d302cc468d Extracting [==============> ] 1.311MB/4.534MB 14:54:39 eabd8714fec9 Extracting [================================> ] 241.2MB/375MB 14:54:39 
c4d302cc468d Extracting [==============================================> ] 4.194MB/4.534MB 14:54:39 55f2b468da67 Extracting [==============================> ] 154.9MB/257.9MB 14:54:39 eabd8714fec9 Extracting [================================> ] 245.1MB/375MB 14:54:39 c4d302cc468d Extracting [==================================================>] 4.534MB/4.534MB 14:54:39 c49e0ee60bfb Pull complete 14:54:39 55f2b468da67 Extracting [==============================> ] 158.2MB/257.9MB 14:54:39 eabd8714fec9 Extracting [=================================> ] 247.9MB/375MB 14:54:40 55f2b468da67 Extracting [===============================> ] 164.3MB/257.9MB 14:54:40 eabd8714fec9 Extracting [=================================> ] 251.8MB/375MB 14:54:40 55f2b468da67 Extracting [================================> ] 169.3MB/257.9MB 14:54:40 408012a7b118 Pull complete 14:54:40 eabd8714fec9 Extracting [==================================> ] 256.2MB/375MB 14:54:40 55f2b468da67 Extracting [=================================> ] 171MB/257.9MB 14:54:40 eabd8714fec9 Extracting [==================================> ] 262.4MB/375MB 14:54:40 384497dbce3b Extracting [> ] 557.1kB/63.48MB 14:54:40 eabd8714fec9 Extracting [===================================> ] 266.8MB/375MB 14:54:40 55f2b468da67 Extracting [=================================> ] 172.7MB/257.9MB 14:54:40 eabd8714fec9 Extracting [===================================> ] 268.5MB/375MB 14:54:40 44986281b8b9 Extracting [==================================================>] 4.022kB/4.022kB 14:54:40 44986281b8b9 Extracting [==================================================>] 4.022kB/4.022kB 14:54:40 384497dbce3b Extracting [> ] 1.114MB/63.48MB 14:54:40 eabd8714fec9 Extracting [===================================> ] 269.1MB/375MB 14:54:40 55f2b468da67 Extracting [=================================> ] 173.8MB/257.9MB 14:54:41 55f2b468da67 Extracting [=================================> ] 174.4MB/257.9MB 14:54:41 384497dbce3b Extracting [=> ] 1.671MB/63.48MB 14:54:41 eabd8714fec9 Extracting [====================================> ] 270.2MB/375MB 14:54:41 55f2b468da67 Extracting [=================================> ] 174.9MB/257.9MB 14:54:41 eabd8714fec9 Extracting [====================================> ] 270.7MB/375MB 14:54:41 c4d302cc468d Pull complete 14:54:41 384497dbce3b Extracting [=> ] 2.228MB/63.48MB 14:54:41 eabd8714fec9 Extracting [====================================> ] 271.3MB/375MB 14:54:41 55f2b468da67 Extracting [==================================> ] 176MB/257.9MB 14:54:41 eabd8714fec9 Extracting [====================================> ] 271.8MB/375MB 14:54:41 55f2b468da67 Extracting [==================================> ] 177.1MB/257.9MB 14:54:41 01e0882c90d9 Extracting [=> ] 32.77kB/1.447MB 14:54:42 55f2b468da67 Extracting [==================================> ] 177.7MB/257.9MB 14:54:42 44986281b8b9 Pull complete 14:54:42 01e0882c90d9 Extracting [==========> ] 294.9kB/1.447MB 14:54:42 384497dbce3b Extracting [==> ] 2.785MB/63.48MB 14:54:42 eabd8714fec9 Extracting [====================================> ] 272.4MB/375MB 14:54:42 55f2b468da67 Extracting [==================================> ] 178.8MB/257.9MB 14:54:42 01e0882c90d9 Extracting [==================================================>] 1.447MB/1.447MB 14:54:42 eabd8714fec9 Extracting [====================================> ] 273.5MB/375MB 14:54:42 55f2b468da67 Extracting [===================================> ] 181MB/257.9MB 14:54:42 55f2b468da67 Extracting [===================================> ] 183.3MB/257.9MB 
14:54:42 55f2b468da67 Extracting [===================================> ] 184.4MB/257.9MB 14:54:42 bf70c5107ab5 Extracting [==================================================>] 1.44kB/1.44kB 14:54:42 bf70c5107ab5 Extracting [==================================================>] 1.44kB/1.44kB 14:54:42 eabd8714fec9 Extracting [====================================> ] 274.6MB/375MB 14:54:42 384497dbce3b Extracting [===> ] 4.456MB/63.48MB 14:54:42 55f2b468da67 Extracting [====================================> ] 187.7MB/257.9MB 14:54:42 eabd8714fec9 Extracting [====================================> ] 276.9MB/375MB 14:54:42 384497dbce3b Extracting [===> ] 5.014MB/63.48MB 14:54:42 01e0882c90d9 Pull complete 14:54:42 55f2b468da67 Extracting [=====================================> ] 192.7MB/257.9MB 14:54:42 eabd8714fec9 Extracting [=====================================> ] 280.8MB/375MB 14:54:42 55f2b468da67 Extracting [=====================================> ] 194.4MB/257.9MB 14:54:42 eabd8714fec9 Extracting [=====================================> ] 284.1MB/375MB 14:54:42 384497dbce3b Extracting [======> ] 7.799MB/63.48MB 14:54:42 eabd8714fec9 Extracting [======================================> ] 285.8MB/375MB 14:54:43 55f2b468da67 Extracting [=====================================> ] 195.5MB/257.9MB 14:54:43 384497dbce3b Extracting [======> ] 8.356MB/63.48MB 14:54:43 531ee2cf3c0c Extracting [> ] 98.3kB/8.066MB 14:54:43 eabd8714fec9 Extracting [======================================> ] 290.8MB/375MB 14:54:43 531ee2cf3c0c Extracting [=> ] 294.9kB/8.066MB 14:54:43 55f2b468da67 Extracting [======================================> ] 196.1MB/257.9MB 14:54:43 eabd8714fec9 Extracting [=======================================> ] 294.1MB/375MB 14:54:43 bf70c5107ab5 Pull complete 14:54:43 384497dbce3b Extracting [=======> ] 9.47MB/63.48MB 14:54:43 1ccde423731d Extracting [==========================> ] 32.77kB/61.44kB 14:54:43 1ccde423731d Extracting [==================================================>] 61.44kB/61.44kB 14:54:43 531ee2cf3c0c Extracting [============> ] 1.966MB/8.066MB 14:54:43 eabd8714fec9 Extracting [=======================================> ] 295.2MB/375MB 14:54:43 55f2b468da67 Extracting [======================================> ] 196.6MB/257.9MB 14:54:43 384497dbce3b Extracting [========> ] 11.14MB/63.48MB 14:54:43 531ee2cf3c0c Extracting [===========================> ] 4.424MB/8.066MB 14:54:43 55f2b468da67 Extracting [======================================> ] 198.3MB/257.9MB 14:54:43 531ee2cf3c0c Extracting [===============================> ] 5.014MB/8.066MB 14:54:43 384497dbce3b Extracting [=========> ] 12.26MB/63.48MB 14:54:43 eabd8714fec9 Extracting [=======================================> ] 296.4MB/375MB 14:54:43 531ee2cf3c0c Extracting [=======================================> ] 6.39MB/8.066MB 14:54:43 384497dbce3b Extracting [==========> ] 13.37MB/63.48MB 14:54:43 eabd8714fec9 Extracting [=======================================> ] 297.5MB/375MB 14:54:43 55f2b468da67 Extracting [======================================> ] 200MB/257.9MB 14:54:43 531ee2cf3c0c Extracting [=================================================> ] 8.061MB/8.066MB 14:54:43 531ee2cf3c0c Extracting [==================================================>] 8.066MB/8.066MB 14:54:43 384497dbce3b Extracting [============> ] 15.6MB/63.48MB 14:54:43 eabd8714fec9 Extracting [=======================================> ] 299.1MB/375MB 14:54:43 55f2b468da67 Extracting [=======================================> ] 202.2MB/257.9MB 14:54:43 
eabd8714fec9 Extracting [========================================> ] 300.3MB/375MB 14:54:43 55f2b468da67 Extracting [=======================================> ] 202.8MB/257.9MB 14:54:43 1ccde423731d Pull complete 14:54:44 7221d93db8a9 Extracting [==================================================>] 100B/100B 14:54:44 384497dbce3b Extracting [=============> ] 16.71MB/63.48MB 14:54:44 7221d93db8a9 Extracting [==================================================>] 100B/100B 14:54:44 55f2b468da67 Extracting [=======================================> ] 203.9MB/257.9MB 14:54:44 eabd8714fec9 Extracting [========================================> ] 301.9MB/375MB 14:54:44 384497dbce3b Extracting [==============> ] 18.38MB/63.48MB 14:54:44 531ee2cf3c0c Pull complete 14:54:44 55f2b468da67 Extracting [=======================================> ] 205.6MB/257.9MB 14:54:44 eabd8714fec9 Extracting [========================================> ] 303MB/375MB 14:54:44 384497dbce3b Extracting [================> ] 20.61MB/63.48MB 14:54:44 eabd8714fec9 Extracting [========================================> ] 304.7MB/375MB 14:54:44 384497dbce3b Extracting [=================> ] 22.84MB/63.48MB 14:54:44 55f2b468da67 Extracting [========================================> ] 207.2MB/257.9MB 14:54:44 eabd8714fec9 Extracting [========================================> ] 305.8MB/375MB 14:54:44 384497dbce3b Extracting [===================> ] 25.07MB/63.48MB 14:54:44 55f2b468da67 Extracting [========================================> ] 208.9MB/257.9MB 14:54:44 eabd8714fec9 Extracting [========================================> ] 306.9MB/375MB 14:54:44 384497dbce3b Extracting [=====================> ] 27.85MB/63.48MB 14:54:44 ed54a7dee1d8 Extracting [=> ] 32.77kB/1.196MB 14:54:44 7221d93db8a9 Pull complete 14:54:44 7df673c7455d Extracting [==================================================>] 694B/694B 14:54:44 7df673c7455d Extracting [==================================================>] 694B/694B 14:54:44 55f2b468da67 Extracting [========================================> ] 210.6MB/257.9MB 14:54:44 eabd8714fec9 Extracting [=========================================> ] 309.2MB/375MB 14:54:44 384497dbce3b Extracting [=======================> ] 30.08MB/63.48MB 14:54:44 ed54a7dee1d8 Extracting [============> ] 294.9kB/1.196MB 14:54:44 ed54a7dee1d8 Extracting [==================================================>] 1.196MB/1.196MB 14:54:44 ed54a7dee1d8 Extracting [==================================================>] 1.196MB/1.196MB 14:54:44 7df673c7455d Pull complete 14:54:44 eabd8714fec9 Extracting [=========================================> ] 310.3MB/375MB 14:54:44 55f2b468da67 Extracting [=========================================> ] 212.2MB/257.9MB 14:54:44 prometheus Pulled 14:54:44 384497dbce3b Extracting [=========================> ] 31.75MB/63.48MB 14:54:44 ed54a7dee1d8 Pull complete 14:54:44 12c5c803443f Extracting [==================================================>] 116B/116B 14:54:44 12c5c803443f Extracting [==================================================>] 116B/116B 14:54:44 eabd8714fec9 Extracting [=========================================> ] 312MB/375MB 14:54:44 55f2b468da67 Extracting [=========================================> ] 213.9MB/257.9MB 14:54:44 384497dbce3b Extracting [==========================> ] 33.42MB/63.48MB 14:54:44 12c5c803443f Pull complete 14:54:44 e27c75a98748 Extracting [==================================================>] 3.144kB/3.144kB 14:54:44 e27c75a98748 Extracting 
[==================================================>] 3.144kB/3.144kB 14:54:44 eabd8714fec9 Extracting [=========================================> ] 313.6MB/375MB 14:54:45 55f2b468da67 Extracting [==========================================> ] 216.7MB/257.9MB 14:54:45 384497dbce3b Extracting [============================> ] 36.21MB/63.48MB 14:54:45 eabd8714fec9 Extracting [==========================================> ] 316.4MB/375MB 14:54:45 55f2b468da67 Extracting [==========================================> ] 220MB/257.9MB 14:54:45 e27c75a98748 Pull complete 14:54:45 384497dbce3b Extracting [==============================> ] 38.99MB/63.48MB 14:54:45 eabd8714fec9 Extracting [==========================================> ] 319.8MB/375MB 14:54:45 55f2b468da67 Extracting [===========================================> ] 222.8MB/257.9MB 14:54:45 384497dbce3b Extracting [================================> ] 41.78MB/63.48MB 14:54:45 e73cb4a42719 Extracting [> ] 557.1kB/109.1MB 14:54:45 eabd8714fec9 Extracting [===========================================> ] 322.5MB/375MB 14:54:45 55f2b468da67 Extracting [===========================================> ] 226.2MB/257.9MB 14:54:45 e73cb4a42719 Extracting [==> ] 5.014MB/109.1MB 14:54:45 384497dbce3b Extracting [==================================> ] 44.01MB/63.48MB 14:54:45 eabd8714fec9 Extracting [===========================================> ] 325.3MB/375MB 14:54:45 e73cb4a42719 Extracting [===> ] 8.356MB/109.1MB 14:54:45 55f2b468da67 Extracting [============================================> ] 227.8MB/257.9MB 14:54:45 384497dbce3b Extracting [====================================> ] 46.79MB/63.48MB 14:54:45 eabd8714fec9 Extracting [===========================================> ] 327.5MB/375MB 14:54:45 e73cb4a42719 Extracting [=====> ] 11.14MB/109.1MB 14:54:45 55f2b468da67 Extracting [============================================> ] 229.5MB/257.9MB 14:54:45 384497dbce3b Extracting [======================================> ] 49.02MB/63.48MB 14:54:45 eabd8714fec9 Extracting [===========================================> ] 329.2MB/375MB 14:54:45 e73cb4a42719 Extracting [======> ] 15.04MB/109.1MB 14:54:45 55f2b468da67 Extracting [============================================> ] 231.2MB/257.9MB 14:54:45 384497dbce3b Extracting [========================================> ] 51.25MB/63.48MB 14:54:45 eabd8714fec9 Extracting [============================================> ] 331.4MB/375MB 14:54:45 e73cb4a42719 Extracting [========> ] 18.38MB/109.1MB 14:54:45 55f2b468da67 Extracting [=============================================> ] 232.3MB/257.9MB 14:54:45 384497dbce3b Extracting [==========================================> ] 54.03MB/63.48MB 14:54:45 e73cb4a42719 Extracting [=========> ] 21.73MB/109.1MB 14:54:45 eabd8714fec9 Extracting [============================================> ] 332.6MB/375MB 14:54:45 55f2b468da67 Extracting [=============================================> ] 234.5MB/257.9MB 14:54:46 e73cb4a42719 Extracting [===========> ] 25.07MB/109.1MB 14:54:46 384497dbce3b Extracting [==============================================> ] 59.05MB/63.48MB 14:54:46 eabd8714fec9 Extracting [============================================> ] 335.3MB/375MB 14:54:46 55f2b468da67 Extracting [=============================================> ] 236.2MB/257.9MB 14:54:46 e73cb4a42719 Extracting [============> ] 27.85MB/109.1MB 14:54:46 55f2b468da67 Extracting [==============================================> ] 239.5MB/257.9MB 14:54:46 384497dbce3b Extracting 
[==============================================> ] 59.6MB/63.48MB 14:54:46 eabd8714fec9 Extracting [=============================================> ] 339.8MB/375MB 14:54:46 e73cb4a42719 Extracting [==============> ] 31.2MB/109.1MB 14:54:46 384497dbce3b Extracting [==================================================>] 63.48MB/63.48MB 14:54:46 384497dbce3b Extracting [==================================================>] 63.48MB/63.48MB 14:54:46 eabd8714fec9 Extracting [=============================================> ] 340.9MB/375MB 14:54:46 e73cb4a42719 Extracting [================> ] 35.65MB/109.1MB 14:54:46 55f2b468da67 Extracting [===============================================> ] 244.5MB/257.9MB 14:54:46 e73cb4a42719 Extracting [==================> ] 40.11MB/109.1MB 14:54:46 384497dbce3b Pull complete 14:54:46 055b9255fa03 Extracting [==================================================>] 11.92kB/11.92kB 14:54:46 055b9255fa03 Extracting [==================================================>] 11.92kB/11.92kB 14:54:46 55f2b468da67 Extracting [===============================================> ] 245.1MB/257.9MB 14:54:46 eabd8714fec9 Extracting [=============================================> ] 342MB/375MB 14:54:46 e73cb4a42719 Extracting [===================> ] 42.89MB/109.1MB 14:54:46 55f2b468da67 Extracting [================================================> ] 251.8MB/257.9MB 14:54:46 055b9255fa03 Pull complete 14:54:46 b176d7edde70 Extracting [==================================================>] 1.227kB/1.227kB 14:54:46 b176d7edde70 Extracting [==================================================>] 1.227kB/1.227kB 14:54:46 e73cb4a42719 Extracting [====================> ] 45.68MB/109.1MB 14:54:46 55f2b468da67 Extracting [=================================================> ] 254MB/257.9MB 14:54:46 e73cb4a42719 Extracting [======================> ] 50.14MB/109.1MB 14:54:46 55f2b468da67 Extracting [==================================================>] 257.9MB/257.9MB 14:54:46 55f2b468da67 Extracting [==================================================>] 257.9MB/257.9MB 14:54:46 eabd8714fec9 Extracting [=============================================> ] 342.6MB/375MB 14:54:46 b176d7edde70 Pull complete 14:54:46 grafana Pulled 14:54:46 e73cb4a42719 Extracting [=======================> ] 51.81MB/109.1MB 14:54:47 eabd8714fec9 Extracting [=============================================> ] 343.1MB/375MB 14:54:47 e73cb4a42719 Extracting [========================> ] 53.48MB/109.1MB 14:54:47 55f2b468da67 Pull complete 14:54:47 82bfc142787e Extracting [> ] 98.3kB/8.613MB 14:54:47 e73cb4a42719 Extracting [=========================> ] 55.71MB/109.1MB 14:54:47 eabd8714fec9 Extracting [==============================================> ] 345.9MB/375MB 14:54:47 82bfc142787e Extracting [=========> ] 1.671MB/8.613MB 14:54:47 e73cb4a42719 Extracting [===========================> ] 59.05MB/109.1MB 14:54:47 eabd8714fec9 Extracting [==============================================> ] 350.9MB/375MB 14:54:47 82bfc142787e Extracting [=================================================> ] 8.552MB/8.613MB 14:54:47 82bfc142787e Extracting [==================================================>] 8.613MB/8.613MB 14:54:47 e73cb4a42719 Extracting [=============================> ] 65.18MB/109.1MB 14:54:47 eabd8714fec9 Extracting [===============================================> ] 355.4MB/375MB 14:54:47 82bfc142787e Pull complete 14:54:47 46baca71a4ef Extracting [==================================================>] 18.11kB/18.11kB 14:54:47 46baca71a4ef 
Extracting [==================================================>] 18.11kB/18.11kB 14:54:47 e73cb4a42719 Extracting [================================> ] 70.75MB/109.1MB 14:54:47 eabd8714fec9 Extracting [===============================================> ] 357.1MB/375MB 14:54:47 46baca71a4ef Pull complete 14:54:47 e73cb4a42719 Extracting [=================================> ] 74.09MB/109.1MB 14:54:47 eabd8714fec9 Extracting [================================================> ] 362.6MB/375MB 14:54:47 b0e0ef7895f4 Extracting [> ] 393.2kB/37.01MB 14:54:47 e73cb4a42719 Extracting [===================================> ] 77.99MB/109.1MB 14:54:47 eabd8714fec9 Extracting [=================================================> ] 367.7MB/375MB 14:54:47 b0e0ef7895f4 Extracting [=============> ] 10.22MB/37.01MB 14:54:47 e73cb4a42719 Extracting [======================================> ] 83.56MB/109.1MB 14:54:47 eabd8714fec9 Extracting [=================================================> ] 371.6MB/375MB 14:54:47 b0e0ef7895f4 Extracting [===============================> ] 23.2MB/37.01MB 14:54:48 e73cb4a42719 Extracting [=========================================> ] 90.24MB/109.1MB 14:54:48 eabd8714fec9 Extracting [==================================================>] 375MB/375MB 14:54:48 b0e0ef7895f4 Extracting [===============================================> ] 35MB/37.01MB 14:54:48 b0e0ef7895f4 Extracting [==================================================>] 37.01MB/37.01MB 14:54:48 e73cb4a42719 Extracting [==========================================> ] 93.03MB/109.1MB 14:54:48 b0e0ef7895f4 Pull complete 14:54:48 c0c90eeb8aca Extracting [==================================================>] 1.105kB/1.105kB 14:54:48 c0c90eeb8aca Extracting [==================================================>] 1.105kB/1.105kB 14:54:48 e73cb4a42719 Extracting [============================================> ] 96.37MB/109.1MB 14:54:48 eabd8714fec9 Pull complete 14:54:48 c0c90eeb8aca Pull complete 14:54:48 45fd2fec8a19 Extracting [==================================================>] 1.103kB/1.103kB 14:54:48 45fd2fec8a19 Extracting [==================================================>] 1.103kB/1.103kB 14:54:48 5cfb27c10ea5 Extracting [==================================================>] 852B/852B 14:54:48 5cfb27c10ea5 Extracting [==================================================>] 852B/852B 14:54:48 e73cb4a42719 Extracting [============================================> ] 96.93MB/109.1MB 14:54:48 e73cb4a42719 Extracting [=============================================> ] 100.3MB/109.1MB 14:54:48 45fd2fec8a19 Pull complete 14:54:48 8f10199ed94b Extracting [> ] 98.3kB/8.768MB 14:54:48 5cfb27c10ea5 Pull complete 14:54:48 40a5eed61bb0 Extracting [==================================================>] 98B/98B 14:54:48 40a5eed61bb0 Extracting [==================================================>] 98B/98B 14:54:48 e73cb4a42719 Extracting [===============================================> ] 103.1MB/109.1MB 14:54:48 8f10199ed94b Extracting [==> ] 491.5kB/8.768MB 14:54:48 40a5eed61bb0 Pull complete 14:54:48 e040ea11fa10 Extracting [==================================================>] 173B/173B 14:54:48 e040ea11fa10 Extracting [==================================================>] 173B/173B 14:54:48 e73cb4a42719 Extracting [================================================> ] 104.7MB/109.1MB 14:54:48 8f10199ed94b Extracting [================================================> ] 8.552MB/8.768MB 14:54:48 8f10199ed94b Extracting 
[==================================================>] 8.768MB/8.768MB 14:54:48 8f10199ed94b Pull complete 14:54:48 e040ea11fa10 Pull complete 14:54:48 f963a77d2726 Extracting [==================================================>] 21.44kB/21.44kB 14:54:48 f963a77d2726 Extracting [==================================================>] 21.44kB/21.44kB 14:54:48 e73cb4a42719 Extracting [================================================> ] 106.4MB/109.1MB 14:54:49 09d5a3f70313 Extracting [> ] 557.1kB/109.2MB 14:54:49 f963a77d2726 Pull complete 14:54:49 e73cb4a42719 Extracting [=================================================> ] 107.5MB/109.1MB 14:54:49 09d5a3f70313 Extracting [====> ] 9.47MB/109.2MB 14:54:49 f3a82e9f1761 Extracting [> ] 458.8kB/44.41MB 14:54:49 e73cb4a42719 Extracting [==================================================>] 109.1MB/109.1MB 14:54:49 09d5a3f70313 Extracting [=======> ] 16.71MB/109.2MB 14:54:49 f3a82e9f1761 Extracting [=============> ] 12.39MB/44.41MB 14:54:49 e73cb4a42719 Pull complete 14:54:49 09d5a3f70313 Extracting [==========> ] 23.95MB/109.2MB 14:54:49 a83b68436f09 Extracting [==================================================>] 9.919kB/9.919kB 14:54:49 a83b68436f09 Extracting [==================================================>] 9.919kB/9.919kB 14:54:49 f3a82e9f1761 Extracting [======================> ] 19.73MB/44.41MB 14:54:49 09d5a3f70313 Extracting [===============> ] 33.42MB/109.2MB 14:54:49 a83b68436f09 Pull complete 14:54:49 787d6bee9571 Extracting [==================================================>] 127B/127B 14:54:49 787d6bee9571 Extracting [==================================================>] 127B/127B 14:54:49 f3a82e9f1761 Extracting [=====================================> ] 33.03MB/44.41MB 14:54:49 09d5a3f70313 Extracting [=====================> ] 47.35MB/109.2MB 14:54:49 f3a82e9f1761 Extracting [==================================================>] 44.41MB/44.41MB 14:54:49 f3a82e9f1761 Pull complete 14:54:49 79161a3f5362 Extracting [==================================================>] 4.656kB/4.656kB 14:54:49 79161a3f5362 Extracting [==================================================>] 4.656kB/4.656kB 14:54:49 787d6bee9571 Pull complete 14:54:49 13ff0988aaea Extracting [==================================================>] 167B/167B 14:54:49 13ff0988aaea Extracting [==================================================>] 167B/167B 14:54:49 09d5a3f70313 Extracting [===========================> ] 60.72MB/109.2MB 14:54:49 79161a3f5362 Pull complete 14:54:49 9c266ba63f51 Extracting [==================================================>] 1.105kB/1.105kB 14:54:49 9c266ba63f51 Extracting [==================================================>] 1.105kB/1.105kB 14:54:49 09d5a3f70313 Extracting [=================================> ] 74.09MB/109.2MB 14:54:49 13ff0988aaea Pull complete 14:54:49 4b82842ab819 Extracting [==================================================>] 5.415kB/5.415kB 14:54:49 4b82842ab819 Extracting [==================================================>] 5.415kB/5.415kB 14:54:49 09d5a3f70313 Extracting [===================================> ] 76.87MB/109.2MB 14:54:49 9c266ba63f51 Pull complete 14:54:49 2e8a7df9c2ee Extracting [==================================================>] 851B/851B 14:54:49 2e8a7df9c2ee Extracting [==================================================>] 851B/851B 14:54:49 4b82842ab819 Pull complete 14:54:49 7e568a0dc8fb Extracting [==================================================>] 184B/184B 14:54:49 7e568a0dc8fb Extracting 
[==================================================>] 184B/184B 14:54:49 09d5a3f70313 Extracting [=========================================> ] 90.24MB/109.2MB 14:54:49 2e8a7df9c2ee Pull complete 14:54:49 10f05dd8b1db Extracting [==================================================>] 98B/98B 14:54:49 10f05dd8b1db Extracting [==================================================>] 98B/98B 14:54:49 09d5a3f70313 Extracting [===============================================> ] 103.6MB/109.2MB 14:54:50 7e568a0dc8fb Pull complete 14:54:50 postgres Pulled 14:54:50 09d5a3f70313 Extracting [=================================================> ] 107.5MB/109.2MB 14:54:50 10f05dd8b1db Pull complete 14:54:50 41dac8b43ba6 Extracting [==================================================>] 171B/171B 14:54:50 41dac8b43ba6 Extracting [==================================================>] 171B/171B 14:54:50 09d5a3f70313 Extracting [==================================================>] 109.2MB/109.2MB 14:54:50 09d5a3f70313 Extracting [==================================================>] 109.2MB/109.2MB 14:54:50 09d5a3f70313 Pull complete 14:54:50 356f5c2c843b Extracting [==================================================>] 3.623kB/3.623kB 14:54:50 356f5c2c843b Extracting [==================================================>] 3.623kB/3.623kB 14:54:50 41dac8b43ba6 Pull complete 14:54:50 71a9f6a9ab4d Extracting [=======> ] 32.77kB/230.6kB 14:54:50 356f5c2c843b Pull complete 14:54:50 kafka Pulled 14:54:50 71a9f6a9ab4d Extracting [==================================================>] 230.6kB/230.6kB 14:54:50 71a9f6a9ab4d Pull complete 14:54:50 da3ed5db7103 Extracting [> ] 557.1kB/127.4MB 14:54:50 da3ed5db7103 Extracting [=====> ] 14.48MB/127.4MB 14:54:50 da3ed5db7103 Extracting [===========> ] 30.08MB/127.4MB 14:54:50 da3ed5db7103 Extracting [==================> ] 46.79MB/127.4MB 14:54:50 da3ed5db7103 Extracting [=========================> ] 64.62MB/127.4MB 14:54:50 da3ed5db7103 Extracting [===============================> ] 80.22MB/127.4MB 14:54:51 da3ed5db7103 Extracting [======================================> ] 98.04MB/127.4MB 14:54:51 da3ed5db7103 Extracting [=============================================> ] 115.3MB/127.4MB 14:54:51 da3ed5db7103 Extracting [===============================================> ] 122MB/127.4MB 14:54:51 da3ed5db7103 Extracting [==================================================>] 127.4MB/127.4MB 14:54:51 da3ed5db7103 Pull complete 14:54:51 c955f6e31a04 Extracting [==================================================>] 3.446kB/3.446kB 14:54:51 c955f6e31a04 Extracting [==================================================>] 3.446kB/3.446kB 14:54:51 c955f6e31a04 Pull complete 14:54:51 zookeeper Pulled 14:54:51 Network compose_default Creating 14:54:51 Network compose_default Created 14:54:51 Container prometheus Creating 14:54:51 Container zookeeper Creating 14:54:51 Container postgres Creating 14:54:59 Container zookeeper Created 14:54:59 Container kafka Creating 14:54:59 Container postgres Created 14:54:59 Container policy-db-migrator Creating 14:54:59 Container prometheus Created 14:54:59 Container grafana Creating 14:54:59 Container policy-db-migrator Created 14:54:59 Container policy-api Creating 14:54:59 Container grafana Created 14:54:59 Container kafka Created 14:54:59 Container policy-api Created 14:54:59 Container policy-pap Creating 14:54:59 Container policy-pap Created 14:54:59 Container policy-opa-pdp Creating 14:54:59 Container policy-opa-pdp Created 14:54:59 Container zookeeper Starting 
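The pull-and-create sequence above is what a docker compose pull followed by docker compose up -d emits for this stack. A minimal sketch of reproducing the bring-up by hand; the compose working directory is an assumption, not a path taken from this log:

  # Sketch: pull images, then create and start the CSIT stack.
  cd compose                # hypothetical directory holding the compose file
  docker compose pull       # per-layer Downloading/Extracting progress, then "<service> Pulled"
  docker compose up -d      # Network/Container Creating/Created/Starting/Started events
  docker compose ps         # confirm every service reports "Up"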
14:54:59 Container zookeeper Starting
14:54:59 Container prometheus Starting
14:54:59 Container postgres Starting
14:55:00 Container prometheus Started
14:55:00 Container grafana Starting
14:55:01 Container grafana Started
14:55:01 Container zookeeper Started
14:55:01 Container kafka Starting
14:55:02 Container kafka Started
14:55:03 Container postgres Started
14:55:03 Container policy-db-migrator Starting
14:55:05 Container policy-db-migrator Started
14:55:05 Container policy-api Starting
14:55:05 Container policy-api Started
14:55:05 Container policy-pap Starting
14:55:06 Container policy-pap Started
14:55:06 Container policy-opa-pdp Starting
14:55:07 Container policy-opa-pdp Started
14:55:07 Prometheus server: http://localhost:30259
14:55:07 Grafana server: http://localhost:30269
14:55:07 Waiting 3 minutes for OPA-PDP to start...
14:58:07 Checking if REST port 30003 is open on localhost ...
14:58:07 IMAGE                                                      NAMES            STATUS
14:58:07 nexus3.onap.org:10001/onap/policy-opa-pdp:1.0.8-SNAPSHOT   policy-opa-pdp   Up 3 minutes
14:58:07 nexus3.onap.org:10001/onap/policy-pap:4.2.1-SNAPSHOT       policy-pap       Up 3 minutes
14:58:07 nexus3.onap.org:10001/onap/policy-api:4.2.1-SNAPSHOT       policy-api       Up 3 minutes
14:58:07 nexus3.onap.org:10001/grafana/grafana:latest               grafana          Up 3 minutes
14:58:07 nexus3.onap.org:10001/confluentinc/cp-kafka:7.4.9          kafka            Up 3 minutes
14:58:07 nexus3.onap.org:10001/confluentinc/cp-zookeeper:latest     zookeeper        Up 3 minutes
14:58:07 nexus3.onap.org:10001/prom/prometheus:latest               prometheus       Up 3 minutes
14:58:07 nexus3.onap.org:10001/library/postgres:16.4                postgres         Up 3 minutes
14:58:07 Checking if REST port 30012 is open on localhost ...
14:58:07 IMAGE                                                      NAMES            STATUS
14:58:07 nexus3.onap.org:10001/onap/policy-opa-pdp:1.0.8-SNAPSHOT   policy-opa-pdp   Up 3 minutes
14:58:07 nexus3.onap.org:10001/onap/policy-pap:4.2.1-SNAPSHOT       policy-pap       Up 3 minutes
14:58:07 nexus3.onap.org:10001/onap/policy-api:4.2.1-SNAPSHOT       policy-api       Up 3 minutes
14:58:07 nexus3.onap.org:10001/grafana/grafana:latest               grafana          Up 3 minutes
14:58:07 nexus3.onap.org:10001/confluentinc/cp-kafka:7.4.9          kafka            Up 3 minutes
14:58:07 nexus3.onap.org:10001/confluentinc/cp-zookeeper:latest     zookeeper        Up 3 minutes
14:58:07 nexus3.onap.org:10001/prom/prometheus:latest               prometheus       Up 3 minutes
14:58:07 nexus3.onap.org:10001/library/postgres:16.4                postgres         Up 3 minutes
14:58:07 Cloning into '/w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp/csit/resources/tests/models'...
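The "Checking if REST port ... is open" probes above gate the test run on the PDP's REST endpoints accepting connections. A minimal sketch of such a probe, assuming a plain TCP connect is an adequate readiness signal (the retry budget and interval are assumptions, not the CSIT script's values):

  # Sketch: block until a TCP port accepts connections, or give up.
  wait_for_port() {
    local host=$1 port=$2 tries=${3:-30}
    until nc -z "$host" "$port"; do
      tries=$((tries - 1))
      if [ "$tries" -le 0 ]; then
        echo "ERROR: port $port on $host never opened" >&2
        return 1
      fi
      sleep 2
    done
    echo "port $port is open on $host"
  }
  wait_for_port localhost 30003   # first REST port checked above
  wait_for_port localhost 30012   # second REST port checked above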
14:58:08 Building robot framework docker image
14:58:44 sha256:871808461706c30308afc4ef28088776df50c021fd6634940821e122f34e8271
14:58:49 top - 14:58:49 up 6 min, 0 users, load average: 1.08, 1.17, 0.60
14:58:49 Tasks: 220 total, 1 running, 149 sleeping, 0 stopped, 0 zombie
14:58:49 %Cpu(s): 11.0 us, 2.7 sy, 0.0 ni, 83.1 id, 3.0 wa, 0.0 hi, 0.1 si, 0.1 st
14:58:49
14:58:49            total     used     free     shared   buff/cache   available
14:58:49 Mem:         31G     2.4G      21G        28M         7.3G         28G
14:58:49 Swap:       1.0G       0B     1.0G
14:58:49
14:58:49 IMAGE                                                      NAMES            STATUS
14:58:49 nexus3.onap.org:10001/onap/policy-opa-pdp:1.0.8-SNAPSHOT   policy-opa-pdp   Up 3 minutes
14:58:49 nexus3.onap.org:10001/onap/policy-pap:4.2.1-SNAPSHOT       policy-pap       Up 3 minutes
14:58:49 nexus3.onap.org:10001/onap/policy-api:4.2.1-SNAPSHOT       policy-api       Up 3 minutes
14:58:49 nexus3.onap.org:10001/grafana/grafana:latest               grafana          Up 3 minutes
14:58:49 nexus3.onap.org:10001/confluentinc/cp-kafka:7.4.9          kafka            Up 3 minutes
14:58:49 nexus3.onap.org:10001/confluentinc/cp-zookeeper:latest     zookeeper        Up 3 minutes
14:58:49 nexus3.onap.org:10001/prom/prometheus:latest               prometheus       Up 3 minutes
14:58:49 nexus3.onap.org:10001/library/postgres:16.4                postgres         Up 3 minutes
14:58:49
14:58:51 CONTAINER ID   NAME             CPU %   MEM USAGE / LIMIT     MEM %   NET I/O           BLOCK I/O     PIDS
14:58:51 5cb4c6151f1b   policy-opa-pdp   0.20%   12MiB / 31.41GiB      0.04%   74.4kB / 70.3kB   0B / 0B       21
14:58:51 ffc816082721   policy-pap       0.70%   568.7MiB / 31.41GiB   1.77%   1.7MB / 993kB     0B / 139MB    69
14:58:51 f489929dab89   policy-api       0.11%   400.8MiB / 31.41GiB   1.25%   1.15MB / 1.05MB   0B / 4.1kB    57
14:58:51 c92e4d85b695   grafana          0.16%   101.8MiB / 31.41GiB   0.32%   15.8MB / 201kB    0B / 30.4MB   19
14:58:51 bbf4536db998   kafka            2.53%   401.8MiB / 31.41GiB   1.25%   286kB / 273kB     0B / 680kB    83
14:58:51 0078d73c1801   zookeeper        0.11%   83.89MiB / 31.41GiB   0.26%   57.8kB / 49.4kB   0B / 393kB    62
14:58:51 4bedbe850124   prometheus       0.09%   21.59MiB / 31.41GiB   0.07%   203kB / 9.27kB    4.1kB / 0B    13
14:58:51 1d93050236e9   postgres         0.00%   87.92MiB / 31.41GiB   0.27%   2.33MB / 3.23MB   0B / 158MB    26
14:58:51
14:58:51 Container policy-csit Creating
14:58:51 Container policy-csit Created
14:58:51 Attaching to policy-csit
14:58:52 policy-csit | Invoking the robot tests from: opa-pdp-test.robot opa-pdp-slas.robot
14:58:52 policy-csit | Run Robot test
14:58:52 policy-csit | ROBOT_VARIABLES=-v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies
14:58:52 policy-csit | -v NODETEMPLATES:/opt/robotworkspace/models/models-examples/src/main/resources/nodetemplates
14:58:52 policy-csit | -v POLICY_API_IP:policy-api:6969
14:58:52 policy-csit | -v POLICY_RUNTIME_ACM_IP:policy-clamp-runtime-acm:6969
14:58:52 policy-csit | -v POLICY_PARTICIPANT_SIM_IP:policy-clamp-ac-sim-ppnt:6969
14:58:52 policy-csit | -v POLICY_PAP_IP:policy-pap:6969
14:58:52 policy-csit | -v APEX_IP:policy-apex-pdp:6969
14:58:52 policy-csit | -v APEX_EVENTS_IP:policy-apex-pdp:23324
14:58:52 policy-csit | -v KAFKA_IP:kafka:9092
14:58:52 policy-csit | -v PROMETHEUS_IP:prometheus:9090
14:58:52 policy-csit | -v POLICY_PDPX_IP:policy-xacml-pdp:6969
14:58:52 policy-csit | -v POLICY_OPA_IP:policy-opa-pdp:8282
14:58:52 policy-csit | -v POLICY_DROOLS_IP:policy-drools-pdp:9696
14:58:52 policy-csit | -v DROOLS_IP:policy-drools-apps:6969
14:58:52 policy-csit | -v DROOLS_IP_2:policy-drools-apps:9696
14:58:52 policy-csit | -v TEMP_FOLDER:/tmp/distribution
14:58:52 policy-csit | -v DISTRIBUTION_IP:policy-distribution:6969
14:58:52 policy-csit | -v TEST_ENV:docker
14:58:52 policy-csit | -v JAEGER_IP:jaeger:16686
14:58:52 policy-csit | Starting Robot test suites ...
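Inside the policy-csit container those ROBOT_VARIABLES are handed straight to Robot Framework. A minimal sketch of the equivalent invocation with a subset of the variables listed above; the output directory and working directory are assumptions:

  # Sketch: run both OPA-PDP suites with -v variable overrides.
  robot --outputdir /tmp/results \
        -v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies \
        -v POLICY_API_IP:policy-api:6969 \
        -v POLICY_PAP_IP:policy-pap:6969 \
        -v POLICY_OPA_IP:policy-opa-pdp:8282 \
        -v PROMETHEUS_IP:prometheus:9090 \
        -v TEST_ENV:docker \
        opa-pdp-test.robot opa-pdp-slas.robot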
14:58:53 policy-csit | ==============================================================================
14:58:53 policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas
14:58:53 policy-csit | ==============================================================================
14:58:53 policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas.Opa-Pdp-Test
14:58:53 policy-csit | ==============================================================================
14:58:53 policy-csit | Healthcheck :: Verify OPA PDP health check | PASS |
14:58:53 policy-csit | ------------------------------------------------------------------------------
14:58:53 policy-csit | ValidateDataBeforePolicyDeployment | PASS |
14:58:53 policy-csit | ------------------------------------------------------------------------------
14:59:19 policy-csit | ValidatesZonePolicy | PASS |
14:59:19 policy-csit | ------------------------------------------------------------------------------
14:59:45 policy-csit | ValidatesVehiclePolicy | PASS |
14:59:45 policy-csit | ------------------------------------------------------------------------------
15:00:11 policy-csit | ValidatesAbacPolicy | PASS |
15:00:11 policy-csit | ------------------------------------------------------------------------------
15:00:11 policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas.Opa-Pdp-Test | PASS |
15:00:11 policy-csit | 5 tests, 5 passed, 0 failed
15:00:11 policy-csit | ==============================================================================
15:00:11 policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas.Opa-Pdp-Slas
15:00:11 policy-csit | ==============================================================================
15:01:11 policy-csit | WaitForPrometheusServer :: Sleep time to wait for Prometheus serve... | PASS |
15:01:11 policy-csit | ------------------------------------------------------------------------------
15:01:11 policy-csit | ValidateOPAPolicyDecisionsTotalCounter :: Validate opa policy deci... | PASS |
15:01:11 policy-csit | ------------------------------------------------------------------------------
15:01:11 policy-csit | ValidateOPAPolicyDataTotalCounter :: Validate opa policy data coun... | PASS |
15:01:11 policy-csit | ------------------------------------------------------------------------------
15:01:11 policy-csit | ValidateOPADecisionAverageResponseTime :: Ensure average response ... | PASS |
15:01:11 policy-csit | ------------------------------------------------------------------------------
15:01:11 policy-csit | ValidateOPADataAverageResponseTime :: Ensure average response time... | PASS |
15:01:11 policy-csit | ------------------------------------------------------------------------------
15:01:11 policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas.Opa-Pdp-Slas | PASS |
15:01:11 policy-csit | 5 tests, 5 passed, 0 failed
15:01:11 policy-csit | ==============================================================================
15:01:11 policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas | PASS |
15:01:11 policy-csit | 10 tests, 10 passed, 0 failed
15:01:11 policy-csit | ==============================================================================
15:01:11 policy-csit | Output: /tmp/results/output.xml
15:01:11 policy-csit | Log: /tmp/results/log.html
15:01:11 policy-csit | Report: /tmp/results/report.html
15:01:11 policy-csit | RESULT: 0
15:01:11 policy-csit exited with code 0
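The Opa-Pdp-Slas suite above validates decision and data counters, and average response times, by querying Prometheus (published at http://localhost:30259 per the startup banner). A minimal sketch of one such check against the instant-query API; the metric name is a hypothetical stand-in for whatever the PDP actually exports:

  # Sketch: read a counter from Prometheus the way the SLA checks do.
  # "pdpo_policy_decisions_total" is a hypothetical metric name.
  curl -s 'http://localhost:30259/api/v1/query' \
       --data-urlencode 'query=pdpo_policy_decisions_total' \
    | jq -r '.data.result[] | "\(.metric.__name__) = \(.value[1])"'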
15:01:11 IMAGE                                                      NAMES            STATUS
15:01:11 nexus3.onap.org:10001/onap/policy-opa-pdp:1.0.8-SNAPSHOT   policy-opa-pdp   Up 6 minutes
15:01:11 nexus3.onap.org:10001/onap/policy-pap:4.2.1-SNAPSHOT       policy-pap       Up 6 minutes
15:01:11 nexus3.onap.org:10001/onap/policy-api:4.2.1-SNAPSHOT       policy-api       Up 6 minutes
15:01:11 nexus3.onap.org:10001/grafana/grafana:latest               grafana          Up 6 minutes
15:01:11 nexus3.onap.org:10001/confluentinc/cp-kafka:7.4.9          kafka            Up 6 minutes
15:01:11 nexus3.onap.org:10001/confluentinc/cp-zookeeper:latest     zookeeper        Up 6 minutes
15:01:11 nexus3.onap.org:10001/prom/prometheus:latest               prometheus       Up 6 minutes
15:01:11 nexus3.onap.org:10001/library/postgres:16.4                postgres         Up 6 minutes
15:01:11 Shut down started!
15:01:13 Collecting logs from docker compose containers...
15:01:13 grafana | logger=settings t=2025-06-16T14:55:01.86858748Z level=info msg="Starting Grafana" version=12.0.1+security-01 commit=ff20b06681749873999bb0a8e365f24fddaee33f branch=HEAD compiled=2025-06-16T14:55:01Z
15:01:13 grafana | logger=settings t=2025-06-16T14:55:01.869100394Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
15:01:13 grafana | logger=settings t=2025-06-16T14:55:01.869115204Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
15:01:13 grafana | logger=settings t=2025-06-16T14:55:01.869120224Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
15:01:13 grafana | logger=settings t=2025-06-16T14:55:01.869124714Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
15:01:13 grafana | logger=settings t=2025-06-16T14:55:01.869128234Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
15:01:13 grafana | logger=settings t=2025-06-16T14:55:01.869132684Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
15:01:13 grafana | logger=settings t=2025-06-16T14:55:01.869139824Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
15:01:13 grafana | logger=settings t=2025-06-16T14:55:01.869148414Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
15:01:13 grafana | logger=settings t=2025-06-16T14:55:01.869153355Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
15:01:13 grafana | logger=settings t=2025-06-16T14:55:01.869158665Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
15:01:13 grafana | logger=settings t=2025-06-16T14:55:01.869170345Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning" 15:01:13 grafana | logger=settings t=2025-06-16T14:55:01.869174215Z level=info msg=Target target=[all] 15:01:13 grafana | logger=settings t=2025-06-16T14:55:01.869190625Z level=info msg="Path Home" path=/usr/share/grafana 15:01:13 grafana | logger=settings t=2025-06-16T14:55:01.869195955Z level=info msg="Path Data" path=/var/lib/grafana 15:01:13 grafana | logger=settings t=2025-06-16T14:55:01.869203885Z level=info msg="Path Logs" path=/var/log/grafana 15:01:13 grafana | logger=settings t=2025-06-16T14:55:01.869207615Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins 15:01:13 grafana | logger=settings t=2025-06-16T14:55:01.869214425Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning 15:01:13 grafana | logger=settings t=2025-06-16T14:55:01.869218065Z level=info msg="App mode production" 15:01:13 grafana | logger=featuremgmt t=2025-06-16T14:55:01.86987491Z level=info msg=FeatureToggles newPDFRendering=true pluginsDetailsRightPanel=true logsContextDatasourceUi=true promQLScope=true tlsMemcached=true kubernetesClientDashboardsFolders=true lokiStructuredMetadata=true ssoSettingsSAML=true recoveryThreshold=true lokiQuerySplitting=true preinstallAutoUpdate=true unifiedStorageSearchPermissionFiltering=true alertingApiServer=true logRowsPopoverMenu=true newDashboardSharingComponent=true recordedQueriesMulti=true grafanaconThemes=true alertingQueryAndExpressionsStepMode=true nestedFolders=true dataplaneFrontendFallback=true dashgpt=true alertingUIOptimizeReducer=true formatString=true alertingRulePermanentlyDelete=true angularDeprecationUI=true pinNavItems=true cloudWatchNewLabelParsing=true onPremToCloudMigrations=true useSessionStorageForRedirection=true panelMonitoring=true publicDashboardsScene=true awsAsyncQueryCaching=true alertingRuleRecoverDeleted=true logsInfiniteScrolling=true failWrongDSUID=true kubernetesPlaylists=true lokiLabelNamesQueryApi=true alertingRuleVersionHistoryRestore=true lokiQueryHints=true cloudWatchCrossAccountQuerying=true alertingNotificationsStepMode=true alertingInsights=true influxdbBackendMigration=true azureMonitorPrometheusExemplars=true unifiedRequestLog=true alertRuleRestore=true logsExploreTableVisualisation=true ssoSettingsApi=true prometheusAzureOverrideAudience=true dashboardSceneForViewers=true reportingUseRawTimeRange=true annotationPermissionUpdate=true prometheusUsesCombobox=true transformationsRedesign=true logsPanelControls=true azureMonitorEnableUserAuth=true addFieldFromCalculationStatFunctions=true alertingSimplifiedRouting=true cloudWatchRoundUpEndTime=true groupToNestedTableTransformation=true dashboardScene=true externalCorePlugins=true correlations=true dashboardSceneSolo=true newFiltersUI=true 15:01:13 grafana | logger=sqlstore t=2025-06-16T14:55:01.869975291Z level=info msg="Connecting to DB" dbtype=sqlite3 15:01:13 grafana | logger=sqlstore t=2025-06-16T14:55:01.870008581Z level=info msg="Creating SQLite database file" path=/var/lib/grafana/grafana.db 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:01.872566712Z level=info msg="Locking database" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:01.872593223Z level=info msg="Starting DB migrations" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:01.873279548Z level=info msg="Executing migration" id="create migration_log table" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:01.874183766Z level=info msg="Migration successfully executed" id="create migration_log table" duration=903.498µs 
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:01.87829772Z level=info msg="Executing migration" id="create user table" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:01.878842064Z level=info msg="Migration successfully executed" id="create user table" duration=543.924µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:01.883669374Z level=info msg="Executing migration" id="add unique index user.login" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:01.884207748Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=537.894µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:01.887249393Z level=info msg="Executing migration" id="add unique index user.email" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:01.887753458Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=503.985µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:01.890984625Z level=info msg="Executing migration" id="drop index UQE_user_login - v1" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:01.891990333Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=1.008058ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:01.897056835Z level=info msg="Executing migration" id="drop index UQE_user_email - v1" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:01.897869921Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=814.396µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:01.900789175Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:01.903252246Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=2.462531ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:01.906297581Z level=info msg="Executing migration" id="create user table v2" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:01.907141698Z level=info msg="Migration successfully executed" id="create user table v2" duration=845.267µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:01.911425874Z level=info msg="Executing migration" id="create index UQE_user_login - v2" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:01.912177969Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=752.245µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:01.915591357Z level=info msg="Executing migration" id="create index UQE_user_email - v2" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:01.916660047Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=1.06043ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:01.920433108Z level=info msg="Executing migration" id="copy data_source v1 to v2" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:01.921009672Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=576.314µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:01.925418649Z level=info msg="Executing migration" id="Drop old table user_v1" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:01.926035114Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=616.905µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:01.929528082Z level=info msg="Executing migration" id="Add column help_flags1 to user table" 15:01:13 grafana | 
logger=migrator t=2025-06-16T14:55:01.931472949Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=1.945827ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:01.93525218Z level=info msg="Executing migration" id="Update user table charset" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:01.93531272Z level=info msg="Migration successfully executed" id="Update user table charset" duration=63.35µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:01.939149663Z level=info msg="Executing migration" id="Add last_seen_at column to user" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:01.940520383Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=1.37136ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:01.9448862Z level=info msg="Executing migration" id="Add missing user data" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:01.945116432Z level=info msg="Migration successfully executed" id="Add missing user data" duration=230.052µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:01.948643301Z level=info msg="Executing migration" id="Add is_disabled column to user" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:01.950332935Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=1.688804ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:01.953953515Z level=info msg="Executing migration" id="Add index user.login/user.email" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:01.954894932Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=941.397µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:01.958277511Z level=info msg="Executing migration" id="Add is_service_account column to user" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:01.95946991Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=1.190049ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:01.963521534Z level=info msg="Executing migration" id="Update is_service_account column to nullable" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:01.972346297Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=8.820323ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:01.976115308Z level=info msg="Executing migration" id="Add uid column to user" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:01.977141066Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=1.025528ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:01.980560365Z level=info msg="Executing migration" id="Update uid column values for users" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:01.980912938Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=354.833µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:01.985305383Z level=info msg="Executing migration" id="Add unique index user_uid" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:01.986604814Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=1.299701ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:01.991249622Z level=info msg="Executing migration" id="Add is_provisioned column to user" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:01.99335811Z level=info msg="Migration successfully executed" 
id="Add is_provisioned column to user" duration=2.108518ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:01.996902499Z level=info msg="Executing migration" id="update login field with orgid to allow for multiple service accounts with same name across orgs" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:01.997518555Z level=info msg="Migration successfully executed" id="update login field with orgid to allow for multiple service accounts with same name across orgs" duration=616.786µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.001750389Z level=info msg="Executing migration" id="update service accounts login field orgid to appear only once" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.002413655Z level=info msg="Migration successfully executed" id="update service accounts login field orgid to appear only once" duration=663.686µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.006864831Z level=info msg="Executing migration" id="update login and email fields to lowercase" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.007468846Z level=info msg="Migration successfully executed" id="update login and email fields to lowercase" duration=605.725µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.011266628Z level=info msg="Executing migration" id="update login and email fields to lowercase2" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.011983663Z level=info msg="Migration successfully executed" id="update login and email fields to lowercase2" duration=716.475µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.015734193Z level=info msg="Executing migration" id="create temp user table v1-7" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.016989714Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=1.258141ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.020714154Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.021598771Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=888.437µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.026471361Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.027280827Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=809.086µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.031031908Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.031753524Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=723.286µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.035333702Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.036053088Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=718.386µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.040217382Z level=info msg="Executing migration" id="Update temp_user table charset" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.040250182Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=30.53µs 15:01:13 grafana | logger=migrator 
t=2025-06-16T14:55:02.042946294Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.043843571Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=897.327µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.047208048Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.047865414Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=656.866µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.051989397Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.052766093Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=776.296µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.056258062Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.057069159Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=804.747µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.060532297Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.062904686Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=2.371849ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.067290342Z level=info msg="Executing migration" id="create temp_user v2" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.067913417Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=625.835µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.071194223Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.071909759Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=713.166µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.075381737Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.076533577Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=1.15067ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.081093464Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.082211263Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=1.117159ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.085941143Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.087061113Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=1.12097ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.091543828Z level=info msg="Executing migration" id="copy temp_user v1 to v2" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.092444316Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=903.008µs 15:01:13 grafana | logger=migrator 
t=2025-06-16T14:55:02.096161116Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.097122864Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=959.718µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.101687141Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.102092924Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=404.763µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.10532824Z level=info msg="Executing migration" id="create star table" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.106038566Z level=info msg="Migration successfully executed" id="create star table" duration=710.376µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.109519085Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.110713114Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=1.190719ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.11515362Z level=info msg="Executing migration" id="Add column dashboard_uid in star" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.11753708Z level=info msg="Migration successfully executed" id="Add column dashboard_uid in star" duration=2.38237ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.121096968Z level=info msg="Executing migration" id="Add column org_id in star" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.123452598Z level=info msg="Migration successfully executed" id="Add column org_id in star" duration=2.35421ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.126797394Z level=info msg="Executing migration" id="Add column updated in star" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.128151065Z level=info msg="Migration successfully executed" id="Add column updated in star" duration=1.351321ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.13121191Z level=info msg="Executing migration" id="add index in star table on dashboard_uid, org_id and user_id columns" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.131967127Z level=info msg="Migration successfully executed" id="add index in star table on dashboard_uid, org_id and user_id columns" duration=754.497µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.13605946Z level=info msg="Executing migration" id="create org table v1" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.136810376Z level=info msg="Migration successfully executed" id="create org table v1" duration=751.456µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.140044993Z level=info msg="Executing migration" id="create index UQE_org_name - v1" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.140743108Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=697.355µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.144108325Z level=info msg="Executing migration" id="create org_user table v1" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.145201914Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=1.092559ms 15:01:13 grafana | logger=migrator 
t=2025-06-16T14:55:02.148695593Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.149835021Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=1.138658ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.153938885Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.154730151Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=790.326µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.157941027Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.158693143Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=748.636µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.16206974Z level=info msg="Executing migration" id="Update org table charset" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.162106571Z level=info msg="Migration successfully executed" id="Update org table charset" duration=37.351µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.165904302Z level=info msg="Executing migration" id="Update org_user table charset" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.165941552Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=36.92µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.170801172Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.171120124Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=319.022µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.174474541Z level=info msg="Executing migration" id="create dashboard table" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.175697261Z level=info msg="Migration successfully executed" id="create dashboard table" duration=1.22196ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.17914611Z level=info msg="Executing migration" id="add index dashboard.account_id" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.179939796Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=793.116µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.18420893Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.185008877Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=799.187µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.189222741Z level=info msg="Executing migration" id="create dashboard_tag table" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.19029509Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=1.068619ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.19398496Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.19531695Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=1.33069ms 15:01:13 grafana | logger=migrator 
t=2025-06-16T14:55:02.19889458Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.199558545Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=666.105µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.20397702Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.212269548Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=8.290198ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.215725107Z level=info msg="Executing migration" id="create dashboard v2" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.216262031Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=536.165µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.219748409Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.220259973Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=511.014µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.224440147Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.225737297Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=1.29908ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.229221345Z level=info msg="Executing migration" id="copy dashboard v1 to v2" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.22976447Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=542.885µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.233123677Z level=info msg="Executing migration" id="drop table dashboard_v1" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.234352347Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=1.2262ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.239003055Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.239018455Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=15.42µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.243829594Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.245631788Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=1.800234ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.248930895Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.25074192Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=1.810325ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.254641762Z level=info msg="Executing migration" id="Add column gnetId in dashboard" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.257048731Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=2.402549ms 15:01:13 grafana | logger=migrator 
t=2025-06-16T14:55:02.260386519Z level=info msg="Executing migration" id="Add index for gnetId in dashboard" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.261611499Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=1.22446ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.265992674Z level=info msg="Executing migration" id="Add column plugin_id in dashboard" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.267866899Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=1.873695ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.27165601Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.272381156Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=724.336µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.275521261Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.276230427Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=708.896µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.279352192Z level=info msg="Executing migration" id="Update dashboard table charset" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.279373512Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=21.81µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.283339295Z level=info msg="Executing migration" id="Update dashboard_tag table charset" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.283364865Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=25µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.286326269Z level=info msg="Executing migration" id="Add column folder_id in dashboard" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.288277394Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=1.950775ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.291522661Z level=info msg="Executing migration" id="Add column isFolder in dashboard" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.293489967Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=1.966826ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.296627762Z level=info msg="Executing migration" id="Add column has_acl in dashboard" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.298629448Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=2.001176ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.302670462Z level=info msg="Executing migration" id="Add column uid in dashboard" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.304550076Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=1.879214ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.307667991Z level=info msg="Executing migration" id="Update uid column values in dashboard" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.307882143Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=213.972µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.310986479Z 
level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.311707565Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=718.956µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.315938959Z level=info msg="Executing migration" id="Remove unique index org_id_slug" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.317040379Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=1.098509ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.320505526Z level=info msg="Executing migration" id="Update dashboard title length" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.320544957Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=40.091µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.323920824Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.32470948Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=787.896µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.328685893Z level=info msg="Executing migration" id="create dashboard_provisioning" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.329349298Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=662.845µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.332505613Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.341243634Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=8.743521ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.344415221Z level=info msg="Executing migration" id="create dashboard_provisioning v2" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.344931765Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=516.404µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.348958287Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.349628052Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=666.625µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.353660425Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.354978565Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=1.31748ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.358280012Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.358613076Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=332.654µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.362707888Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty" 15:01:13 grafana | logger=migrator 
t=2025-06-16T14:55:02.363224992Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=516.914µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.36665021Z level=info msg="Executing migration" id="Add check_sum column" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.369937737Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=3.286567ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.373319835Z level=info msg="Executing migration" id="Add index for dashboard_title" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.374464964Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=1.144599ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.378669079Z level=info msg="Executing migration" id="delete tags for deleted dashboards" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.37883284Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=163.641µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.382449309Z level=info msg="Executing migration" id="delete stars for deleted dashboards" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.38264135Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=191.901µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.386870235Z level=info msg="Executing migration" id="Add index for dashboard_is_folder" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.388099755Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=1.22868ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.398924712Z level=info msg="Executing migration" id="Add isPublic for dashboard" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.401583034Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=2.661332ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.404438897Z level=info msg="Executing migration" id="Add deleted for dashboard" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.406736326Z level=info msg="Migration successfully executed" id="Add deleted for dashboard" duration=2.297359ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.40974301Z level=info msg="Executing migration" id="Add index for deleted" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.410682768Z level=info msg="Migration successfully executed" id="Add index for deleted" duration=939.418µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.417990437Z level=info msg="Executing migration" id="Add column dashboard_uid in dashboard_tag" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.421915759Z level=info msg="Migration successfully executed" id="Add column dashboard_uid in dashboard_tag" duration=3.924282ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.424994893Z level=info msg="Executing migration" id="Add column org_id in dashboard_tag" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.427391633Z level=info msg="Migration successfully executed" id="Add column org_id in dashboard_tag" duration=2.39426ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.430395358Z level=info msg="Executing migration" id="Add missing dashboard_uid and org_id to dashboard_tag" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.430828231Z level=info msg="Migration successfully executed" 
id="Add missing dashboard_uid and org_id to dashboard_tag" duration=432.853µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.435700481Z level=info msg="Executing migration" id="Add apiVersion for dashboard" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.43803171Z level=info msg="Migration successfully executed" id="Add apiVersion for dashboard" duration=2.331009ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.441060584Z level=info msg="Executing migration" id="Add index for dashboard_uid on dashboard_tag table" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.441869791Z level=info msg="Migration successfully executed" id="Add index for dashboard_uid on dashboard_tag table" duration=812.897µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.446110556Z level=info msg="Executing migration" id="Add missing dashboard_uid and org_id to star" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.446571159Z level=info msg="Migration successfully executed" id="Add missing dashboard_uid and org_id to star" duration=460.543µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.449611894Z level=info msg="Executing migration" id="create data_source table" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.450565211Z level=info msg="Migration successfully executed" id="create data_source table" duration=952.077µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.45400756Z level=info msg="Executing migration" id="add index data_source.account_id" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.454980247Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=974.897µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.457922291Z level=info msg="Executing migration" id="add unique index data_source.account_id_name" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.458663527Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=741.676µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.462995432Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.463640867Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=645.395µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.466221838Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.466863773Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=641.715µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.471816974Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.480861647Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=9.044843ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.483983162Z level=info msg="Executing migration" id="create data_source table v2" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.485018381Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=1.037489ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.487944565Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2" 15:01:13 grafana | 
logger=migrator t=2025-06-16T14:55:02.488848542Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=903.267µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.493782292Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.49478241Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=998.358µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.498220278Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.499146676Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=925.678µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.502809025Z level=info msg="Executing migration" id="Add column with_credentials" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.505626308Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=2.817333ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.51945677Z level=info msg="Executing migration" id="Add secure json data column" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.523482133Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=4.029283ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.527067202Z level=info msg="Executing migration" id="Update data_source table charset" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.527112753Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=161.642µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.530384009Z level=info msg="Executing migration" id="Update initial version to 1" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.530681501Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=296.982µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.535639262Z level=info msg="Executing migration" id="Add read_only data column" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.538244243Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=2.604251ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.542471317Z level=info msg="Executing migration" id="Migrate logging ds to loki ds" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.542763509Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=291.502µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.547226466Z level=info msg="Executing migration" id="Update json_data with nulls" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.547485768Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=258.802µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.553599707Z level=info msg="Executing migration" id="Add uid column" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.556590072Z level=info msg="Migration successfully executed" id="Add uid column" duration=2.991455ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.560108481Z level=info msg="Executing migration" id="Update uid value" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.560381433Z level=info msg="Migration successfully executed" id="Update uid value" duration=270.762µs 
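The logger=migrator entries throughout this dump follow an execute-and-record pattern: a migration_log table is created first, each pending migration is run once and timed, and its id is recorded so a later restart skips it. A minimal sqlite3 sketch of that pattern, illustrative only (Grafana's actual migrator is Go, and these migration ids are just examples taken from the log):

import sqlite3
import time

# Ordered (id, SQL) pairs, mirroring ids like "create user table" above.
MIGRATIONS = [
    ("create user table",
     "CREATE TABLE IF NOT EXISTS user (id INTEGER PRIMARY KEY, login TEXT)"),
    ("add unique index user.login",
     "CREATE UNIQUE INDEX IF NOT EXISTS UQE_user_login ON user (login)"),
]

def run_migrations(db: sqlite3.Connection) -> None:
    db.execute("CREATE TABLE IF NOT EXISTS migration_log ("
               "migration_id TEXT PRIMARY KEY, timestamp TEXT)")
    done = {row[0] for row in db.execute("SELECT migration_id FROM migration_log")}
    for mig_id, sql in MIGRATIONS:
        if mig_id in done:
            continue  # already applied on a previous start
        print(f'msg="Executing migration" id="{mig_id}"')
        start = time.perf_counter()
        db.execute(sql)
        db.execute("INSERT INTO migration_log VALUES (?, datetime('now'))",
                   (mig_id,))
        db.commit()
        duration_us = (time.perf_counter() - start) * 1e6
        print(f'msg="Migration successfully executed" id="{mig_id}" '
              f'duration={duration_us:.3f}µs')

run_migrations(sqlite3.connect(":memory:"))
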
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.562995464Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.563885121Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=889.447µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.570472544Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.571910336Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=1.436802ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.575851518Z level=info msg="Executing migration" id="Add is_prunable column" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.580452185Z level=info msg="Migration successfully executed" id="Add is_prunable column" duration=4.599037ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.585076472Z level=info msg="Executing migration" id="Add api_version column" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.58841484Z level=info msg="Migration successfully executed" id="Add api_version column" duration=3.335888ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.594520939Z level=info msg="Executing migration" id="Update secure_json_data column to MediumText" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.59454348Z level=info msg="Migration successfully executed" id="Update secure_json_data column to MediumText" duration=23.631µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.598839514Z level=info msg="Executing migration" id="create api_key table" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.599863433Z level=info msg="Migration successfully executed" id="create api_key table" duration=1.021779ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.610161846Z level=info msg="Executing migration" id="add index api_key.account_id" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.611651889Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=1.491953ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.616182776Z level=info msg="Executing migration" id="add index api_key.key" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.617836989Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=1.653253ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.622399146Z level=info msg="Executing migration" id="add index api_key.account_id_name" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.624049389Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=1.650753ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.62779748Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.628474145Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=676.345µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.633298594Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.634374833Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=1.074449ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.638599388Z level=info 
msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.639822947Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=1.22326ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.644479905Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.652588031Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=8.105936ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.656005868Z level=info msg="Executing migration" id="create api_key table v2" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.656858065Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=853.657µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.659879469Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.660754986Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=875.317µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.665524926Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.666434483Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=909.558µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.66965009Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.671468614Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=1.817504ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.675584378Z level=info msg="Executing migration" id="copy api_key v1 to v2" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.676223683Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=639.554µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.681235603Z level=info msg="Executing migration" id="Drop old table api_key_v1" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.681952869Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=716.636µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.685152455Z level=info msg="Executing migration" id="Update api_key table charset" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.685178625Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=26.63µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.6883148Z level=info msg="Executing migration" id="Add expires to api_key table" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.692799187Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=4.483257ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.698171411Z level=info msg="Executing migration" id="Add service account foreign key" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.70294323Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=4.83135ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.706417167Z level=info msg="Executing migration" id="set service account foreign key to nil if 0" 
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.706831511Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=413.654µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.709974546Z level=info msg="Executing migration" id="Add last_used_at to api_key table"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.713191922Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=3.213356ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.719966507Z level=info msg="Executing migration" id="Add is_revoked column to api_key table"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.722952112Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=2.988855ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.726828523Z level=info msg="Executing migration" id="create dashboard_snapshot table v4"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.72766771Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=838.287µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.731169119Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.731825474Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=655.825µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.737428949Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.73887031Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=1.440691ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.742311449Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.74366728Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=1.355601ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.746979336Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.747786463Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=806.667µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.752834404Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.753787102Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=954.708µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.757395331Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.757424831Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=30.83µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.7609672Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.76101466Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=47.9µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.766707176Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.769852261Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=3.143795ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.772905426Z level=info msg="Executing migration" id="Add encrypted dashboard json column"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.776341615Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=3.428589ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.779830183Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.779852843Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=23.14µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.783829775Z level=info msg="Executing migration" id="create quota table v1"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.784787174Z level=info msg="Migration successfully executed" id="create quota table v1" duration=956.609µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.790138527Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.791134875Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=995.568µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.794145279Z level=info msg="Executing migration" id="Update quota table charset"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.794171149Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=26.47µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.797228154Z level=info msg="Executing migration" id="create plugin_setting table"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.798114671Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=885.757µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.803640636Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.804541604Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=896.508µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.807638279Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.810771264Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=3.132225ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.813758228Z level=info msg="Executing migration" id="Update plugin_setting table charset"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.813786238Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=26.47µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.819129572Z level=info msg="Executing migration" id="update NULL org_id to 1"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.819462444Z level=info msg="Migration successfully executed" id="update NULL org_id to 1" duration=332.022µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.822845382Z level=info msg="Executing migration" id="make org_id NOT NULL and DEFAULT VALUE 1"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.837520311Z level=info msg="Migration successfully executed" id="make org_id NOT NULL and DEFAULT VALUE 1" duration=14.68266ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.841158901Z level=info msg="Executing migration" id="create session table"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.842018698Z level=info msg="Migration successfully executed" id="create session table" duration=859.297µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.846409854Z level=info msg="Executing migration" id="Drop old table playlist table"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.846637845Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=227.141µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.850372725Z level=info msg="Executing migration" id="Drop old table playlist_item table"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.850579377Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=205.452µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.854145826Z level=info msg="Executing migration" id="create playlist table v2"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.855372656Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=1.22564ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.860312716Z level=info msg="Executing migration" id="create playlist item table v2"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.861645137Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=1.331991ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.865392098Z level=info msg="Executing migration" id="Update playlist table charset"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.865435118Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=43.43µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.869169068Z level=info msg="Executing migration" id="Update playlist_item table charset"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.869209998Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=41.62µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.872535935Z level=info msg="Executing migration" id="Add playlist column created_at"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.875807871Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=3.270906ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.879952856Z level=info msg="Executing migration" id="Add playlist column updated_at"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.883198132Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=3.247706ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.886587249Z level=info msg="Executing migration" id="drop preferences table v2"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.88677557Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=187.461µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.890222328Z level=info msg="Executing migration" id="drop preferences table v3"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.89041227Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=187.472µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.894983797Z level=info msg="Executing migration" id="create preferences table v3"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.895900784Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=916.317µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.899529284Z level=info msg="Executing migration" id="Update preferences table charset"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.899576815Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=46.051µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.903344685Z level=info msg="Executing migration" id="Add column team_id in preferences"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.90751285Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=4.164955ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.912135886Z level=info msg="Executing migration" id="Update team_id column values in preferences"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.912346538Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=210.212µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.91508476Z level=info msg="Executing migration" id="Add column week_start in preferences"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.91863307Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=3.54729ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.922193398Z level=info msg="Executing migration" id="Add column preferences.json_data"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.925576536Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=3.382228ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.929250295Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.929271445Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=22.37µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.933391939Z level=info msg="Executing migration" id="Add preferences index org_id"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.934275096Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=882.827µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.937604833Z level=info msg="Executing migration" id="Add preferences index user_id"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.938648732Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=1.043449ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.94210187Z level=info msg="Executing migration" id="create alert table v1"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.943165718Z level=info msg="Migration successfully executed" id="create alert table v1" duration=1.062988ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.947556384Z level=info msg="Executing migration" id="add index alert org_id & id "
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.948458751Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=901.987µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.951828768Z level=info msg="Executing migration" id="add index alert state"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.95325285Z level=info msg="Migration successfully executed" id="add index alert state" duration=1.423422ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.957037781Z level=info msg="Executing migration" id="add index alert dashboard_id"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.958539953Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=1.501382ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.962942029Z level=info msg="Executing migration" id="Create alert_rule_tag table v1"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.963717646Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=774.797µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.967003592Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.967910769Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=906.987µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.970997214Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.971910882Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=913.048µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.977027373Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.991379869Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=14.346766ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.997145337Z level=info msg="Executing migration" id="Create alert_rule_tag table v2"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:02.998469657Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=1.32513ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.001961125Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.002964854Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=1.003319ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.006079281Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.006470384Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=390.394µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.011717028Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.012349103Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=631.145µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.015358729Z level=info msg="Executing migration" id="create alert_notification table v1"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.016342827Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=981.218µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.019362942Z level=info msg="Executing migration" id="Add column is_default"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.023137875Z level=info msg="Migration successfully executed" id="Add column is_default" duration=3.775033ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.028143807Z level=info msg="Executing migration" id="Add column frequency"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.031865369Z level=info msg="Migration successfully executed" id="Add column frequency" duration=3.720792ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.035037125Z level=info msg="Executing migration" id="Add column send_reminder"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.038756706Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=3.718701ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.041949794Z level=info msg="Executing migration" id="Add column disable_resolve_message"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.045715956Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=3.765212ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.050634338Z level=info msg="Executing migration" id="add index alert_notification org_id & name"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.051559845Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=925.067µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.054555551Z level=info msg="Executing migration" id="Update alert table charset"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.054581841Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=27.03µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.057654857Z level=info msg="Executing migration" id="Update alert_notification table charset"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.057680767Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=26.61µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.060720562Z level=info msg="Executing migration" id="create notification_journal table v1"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.061587481Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=863.688µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.066195079Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.067181218Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=985.739µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.070424455Z level=info msg="Executing migration" id="drop alert_notification_journal"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.071262642Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=837.747µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.076789999Z level=info msg="Executing migration" id="create alert_notification_state table v1"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.077688656Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=896.617µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.081305657Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.082927671Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=1.620523ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.08645657Z level=info msg="Executing migration" id="Add for to alert table"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.090395294Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=3.939044ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.095243535Z level=info msg="Executing migration" id="Add column uid in alert_notification"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.099137988Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=3.893813ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.102747148Z level=info msg="Executing migration" id="Update uid column values in alert_notification"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.103040621Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=292.963µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.106273058Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.107348247Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=1.074829ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.112520031Z level=info msg="Executing migration" id="Remove unique index org_id_name"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.113499979Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=979.848µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.116731027Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.12064057Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=3.908853ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.123872797Z level=info msg="Executing migration" id="alter alert.settings to mediumtext"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.123894818Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=19.691µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.129272223Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.130253031Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=980.618µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.133422828Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.135008132Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=1.584254ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.138452331Z level=info msg="Executing migration" id="Drop old annotation table v4"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.138772283Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=319.332µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.14193759Z level=info msg="Executing migration" id="create annotation table v5"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.142994259Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=1.055449ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.148581597Z level=info msg="Executing migration" id="add index annotation 0 v3"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.149627655Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=1.154589ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.153075364Z level=info msg="Executing migration" id="add index annotation 1 v3"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.154656978Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=1.580884ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.159678781Z level=info msg="Executing migration" id="add index annotation 2 v3"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.160621538Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=942.617µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.163787605Z level=info msg="Executing migration" id="add index annotation 3 v3"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.164800183Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=1.011948ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.167837319Z level=info msg="Executing migration" id="add index annotation 4 v3"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.168780927Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=943.168µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.173850381Z level=info msg="Executing migration" id="Update annotation table charset"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.173876981Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=27.38µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.177004017Z level=info msg="Executing migration" id="Add column region_id to annotation table"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.183602273Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=6.597656ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.187443415Z level=info msg="Executing migration" id="Drop category_id index"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.188371513Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=925.348µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.193454946Z level=info msg="Executing migration" id="Add column tags to annotation table"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.200073622Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=6.616556ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.203530491Z level=info msg="Executing migration" id="Create annotation_tag table v2"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.204102246Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=571.455µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.207243783Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.20801727Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=772.626µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.213443725Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.214859737Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=1.420012ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.218369077Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.230408139Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=12.039062ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.233639366Z level=info msg="Executing migration" id="Create annotation_tag table v3"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.234274871Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=634.945µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.239177873Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.240268532Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=1.089759ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.243492819Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.243891943Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=398.524µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.246923659Z level=info msg="Executing migration" id="drop table annotation_tag_v2"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.247574834Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=646.405µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.252787078Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.253098881Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=306.853µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.256013626Z level=info msg="Executing migration" id="Add created time to annotation table"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.262717982Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=6.699076ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.265962439Z level=info msg="Executing migration" id="Add updated time to annotation table"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.269021656Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=3.058767ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.271990371Z level=info msg="Executing migration" id="Add index for created in annotation table"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.272786838Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=796.007µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.277703519Z level=info msg="Executing migration" id="Add index for updated in annotation table"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.278853139Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=1.14937ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.282067196Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.282417529Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=349.633µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.285487704Z level=info msg="Executing migration" id="Add epoch_end column"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.290163844Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=4.67564ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.29558601Z level=info msg="Executing migration" id="Add index for epoch_end"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.29671851Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=1.13208ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.300360041Z level=info msg="Executing migration" id="Make epoch_end the same as epoch"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.300736874Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=375.453µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.304335574Z level=info msg="Executing migration" id="Move region to single row"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.305131211Z level=info msg="Migration successfully executed" id="Move region to single row" duration=795.227µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.308420439Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.309478718Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=1.058709ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.314022217Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.315062665Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=1.040198ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.318123651Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.319289032Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=1.167771ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.324348514Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.325627125Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=1.278591ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.329044004Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.330483267Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=1.438823ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.333913536Z level=info msg="Executing migration" id="Add index for alert_id on annotation table"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.335107355Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=1.19403ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.339368682Z level=info msg="Executing migration" id="Increase tags column to length 4096"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.339405482Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=40.92µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.343395475Z level=info msg="Executing migration" id="Increase prev_state column to length 40 not null"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.343425626Z level=info msg="Migration successfully executed" id="Increase prev_state column to length 40 not null" duration=31.681µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.348727371Z level=info msg="Executing migration" id="Increase new_state column to length 40 not null"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.348754621Z level=info msg="Migration successfully executed" id="Increase new_state column to length 40 not null" duration=29.11µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.352044568Z level=info msg="Executing migration" id="create test_data table"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.353419561Z level=info msg="Migration successfully executed" id="create test_data table" duration=1.374653ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.356518087Z level=info msg="Executing migration" id="create dashboard_version table v1"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.357359014Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=840.757µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.360391369Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.361328788Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=937.419µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.365699564Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.366633152Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=933.288µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.369520576Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.369721789Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=201.213µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.372688274Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.373224268Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=539.244µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.378342462Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.378370232Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=28.64µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.381526788Z level=info msg="Executing migration" id="Add apiVersion for dashboard_version"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.389108032Z level=info msg="Migration successfully executed" id="Add apiVersion for dashboard_version" duration=7.578244ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.392619512Z level=info msg="Executing migration" id="create team table"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.39352452Z level=info msg="Migration successfully executed" id="create team table" duration=880.997µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.398708354Z level=info msg="Executing migration" id="add index team.org_id"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.399684742Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=976.648µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.402896309Z level=info msg="Executing migration" id="add unique index team_org_id_name"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.403835537Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=938.588µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.406918203Z level=info msg="Executing migration" id="Add column uid in team"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.411783735Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=4.864532ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.41719515Z level=info msg="Executing migration" id="Update uid column values in team"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.417405722Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=210.572µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.420524719Z level=info msg="Executing migration" id="Add unique index team_org_id_uid"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.421528377Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=1.003658ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.425065507Z level=info msg="Executing migration" id="Add column external_uid in team"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.43019784Z level=info msg="Migration successfully executed" id="Add column external_uid in team" duration=5.131333ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.433348297Z level=info msg="Executing migration" id="Add column is_provisioned in team"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.438541881Z level=info msg="Migration successfully executed" id="Add column is_provisioned in team" duration=5.190624ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.444486221Z level=info msg="Executing migration" id="create team member table"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.445621332Z level=info msg="Migration successfully executed" id="create team member table" duration=1.13603ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.448895079Z level=info msg="Executing migration" id="add index team_member.org_id"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.450120729Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=1.22645ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.453621148Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.454809139Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=1.187691ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.460206495Z level=info msg="Executing migration" id="add index team_member.team_id"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.461510685Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=1.30419ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.465068646Z level=info msg="Executing migration" id="Add column email to team table"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.473395347Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=8.326701ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.477628692Z level=info msg="Executing migration" id="Add column external to team_member table"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.481640125Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=4.011433ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.487327175Z level=info msg="Executing migration" id="Add column permission to team_member table"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.492911822Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=5.584647ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.496401051Z level=info msg="Executing migration" id="add unique index team_member_user_id_org_id"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.498360907Z level=info msg="Migration successfully executed" id="add unique index team_member_user_id_org_id" duration=1.959856ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.502087019Z level=info msg="Executing migration" id="create dashboard acl table"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.50340619Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=1.319161ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.509090378Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.51044945Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=1.358842ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.513840489Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.51524642Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=1.405092ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.51867291Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.519537247Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=863.547µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.524741471Z level=info msg="Executing migration" id="add index dashboard_acl_user_id"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.525627288Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=884.447µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.529108948Z level=info msg="Executing migration" id="add index dashboard_acl_team_id"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.53051794Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=1.380472ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.533833188Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.535355191Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=1.520353ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.540687577Z level=info msg="Executing migration" id="add index dashboard_permission"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.541666015Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=978.108µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.544929292Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.545440286Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=510.874µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.552140753Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.553835697Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=1.743234ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.558328106Z level=info msg="Executing migration" id="create tag table"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.559533315Z level=info msg="Migration successfully executed" id="create tag table" duration=1.204519ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.563422499Z level=info msg="Executing migration" id="add index tag.key_value"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.565054882Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=1.631963ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.570351707Z level=info msg="Executing migration" id="create login attempt table"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.571442737Z level=info msg="Migration successfully executed" id="create login attempt table" duration=1.09203ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.575174478Z level=info msg="Executing migration" id="add index login_attempt.username"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.577221096Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=2.045958ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.582089836Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.583484429Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=1.394103ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.587882595Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.605794047Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=17.909812ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.610482396Z level=info msg="Executing migration" id="create login_attempt v2"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.611580696Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=1.09827ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.615029045Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.615867492Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=837.157µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.619650295Z level=info msg="Executing migration" id="copy login_attempt v1 to v2"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.620016628Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=365.763µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.623019673Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.623655378Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=634.845µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.626955466Z level=info msg="Executing migration" id="create user auth table"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.628044376Z level=info msg="Migration successfully executed" id="create user auth table" duration=1.08832ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.632606544Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.633447192Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=840.528µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.636571338Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.636588558Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=17.79µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.639789505Z level=info msg="Executing migration" id="Add OAuth access token to user_auth"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.644474875Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=4.68478ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.648400788Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.65217389Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=3.772062ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.656681208Z level=info msg="Executing migration" id="Add OAuth token type to user_auth"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.663160233Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=6.477655ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.666857725Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.672117908Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=5.259573ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.675534958Z level=info msg="Executing migration" id="Add index to user_id column in user_auth"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.676728238Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=1.19253ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.683726668Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.689729238Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=6.044291ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.693451549Z level=info msg="Executing migration" id="Add user_unique_id to user_auth"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.700175496Z level=info msg="Migration successfully executed" id="Add user_unique_id to user_auth" duration=6.720627ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.703884588Z level=info msg="Executing migration" id="create server_lock table"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.704567674Z level=info msg="Migration successfully executed" id="create server_lock table" duration=683.116µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.708961701Z level=info msg="Executing migration" id="add index server_lock.operation_uid"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.709976599Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=1.014438ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.71358806Z level=info msg="Executing migration" id="create user auth token table"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.714869451Z level=info msg="Migration successfully executed" id="create user auth token table" duration=1.280731ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.719426679Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.721469217Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=2.041538ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.725567232Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.727304966Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=1.737744ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.731062688Z level=info msg="Executing migration" id="add index user_auth_token.user_id"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.732264938Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=1.19121ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.736723677Z level=info msg="Executing migration" id="Add revoked_at to the user auth token"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.744373411Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=7.649734ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.748267574Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.749372383Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=1.104809ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.753185815Z level=info msg="Executing migration" id="add external_session_id to user_auth_token"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.761231793Z level=info msg="Migration successfully executed" id="add external_session_id to user_auth_token" duration=8.045978ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.772153776Z level=info msg="Executing migration" id="create cache_data table"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.773202845Z level=info msg="Migration successfully executed" id="create cache_data table" duration=1.048919ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.777037767Z level=info msg="Executing migration" id="add unique index cache_data.cache_key"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.778044676Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=1.010239ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.801768177Z level=info msg="Executing migration" id="create short_url table v1"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.803446381Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=1.676944ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.807666777Z level=info msg="Executing migration" id="add index short_url.org_id-uid"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.809199789Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=1.532832ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.813676117Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.813699728Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=24.471µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.816435981Z level=info msg="Executing migration" id="delete alert_definition table"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.816500081Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=64.52µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.822859595Z level=info msg="Executing migration" id="recreate alert_definition table"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.823477671Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=618.056µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.828101659Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.829086438Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=984.449µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.833350994Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.835351482Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=2.019278ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.840411295Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.840439065Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=28.82µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.843220258Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.844456478Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=1.23188ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.848847836Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.849824274Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=980.728µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.85527533Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.85645619Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=1.18032ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.860665846Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.861674684Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=1.009098ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.865714859Z level=info msg="Executing migration" id="Add column paused in alert_definition"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.870788131Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=5.071792ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.875114248Z level=info msg="Executing migration" id="drop alert_definition table"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.876121136Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=1.007908ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.882869684Z level=info msg="Executing migration" id="delete alert_definition_version table"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.882982275Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=113.641µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.887658654Z level=info msg="Executing migration" id="recreate alert_definition_version table"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.889262078Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=1.598604ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.899302643Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.900638474Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=1.335961ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.909754851Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.911311834Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=1.557033ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.915801312Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.915823462Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=24.08µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.919933347Z level=info msg="Executing migration" id="drop alert_definition_version table"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.920933335Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=999.508µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.926984276Z level=info msg="Executing migration" id="create alert_instance table"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.929111385Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=2.126789ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.936272286Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.937546486Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=1.27399ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.945456314Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.946819226Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=1.363422ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.952320512Z level=info msg="Executing migration" id="add column current_state_end to alert_instance"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.957180633Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=4.859721ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.96157625Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.962549888Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=973.058µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.966903445Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.967830643Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=927.568µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.973262549Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:03.999153498Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=25.890949ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.004742415Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.031338548Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=26.596133ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.037978833Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.039155943Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=1.1771ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.043959453Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.045034312Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=1.074859ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.04835478Z level=info msg="Executing migration" id="add current_reason column related to current_state"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.054621472Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=6.266692ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.059500603Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.066045657Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=6.545054ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.070808367Z level=info msg="Executing migration" id="create alert_rule table"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.071812875Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=1.004508ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.076116992Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.077442842Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=1.32593ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.083090869Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.084435181Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=1.344312ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.089086069Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.09027538Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=1.188911ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.094956648Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.094974398Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=18.59µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.09999997Z level=info msg="Executing migration" id="add column for to alert_rule"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.109219878Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=9.219908ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.113194611Z level=info msg="Executing migration" id="add column annotations to alert_rule"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.117692879Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=4.498268ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.121870103Z level=info msg="Executing migration" id="add column labels to alert_rule"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.128483158Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=6.613055ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.132716184Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.133650741Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=936.357µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.138907535Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.139974304Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=1.066769ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.144359621Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.150835795Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=6.476174ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.156418151Z level=info msg="Executing migration" id="add panel_id column to alert_rule"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.162897836Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=6.479685ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.166276334Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.167351723Z level=info msg="Migration
successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=1.074979ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.188051706Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.195992761Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=7.941345ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.200670861Z level=info msg="Executing migration" id="add is_paused column to alert_rule table" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.206902713Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=6.211831ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.211105948Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.211160679Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=23.07µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.215203302Z level=info msg="Executing migration" id="create alert_rule_version table" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.216282721Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=1.080129ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.221412084Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.222462723Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.050079ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.22684458Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.228478713Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=1.633013ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.233094332Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.233120232Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=27.06µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.237424658Z level=info msg="Executing migration" id="add column for to alert_rule_version" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.243893502Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=6.467174ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.248831343Z level=info msg="Executing migration" id="add column annotations to alert_rule_version" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.255802272Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=6.969159ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.260133297Z level=info msg="Executing migration" id="add column labels to alert_rule_version" 15:01:13 grafana | logger=migrator 
t=2025-06-16T14:55:04.267146376Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=7.012709ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.271410471Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.277799885Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=6.388304ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.28315971Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.289442312Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=6.282232ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.293558676Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.293579706Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=22.32µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.297752222Z level=info msg="Executing migration" id=create_alert_configuration_table 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.298553448Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=801.096µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.303995374Z level=info msg="Executing migration" id="Add column default in alert_configuration" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.312031311Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=8.035968ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.316336466Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.316361436Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=25.9µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.320490321Z level=info msg="Executing migration" id="add column org_id in alert_configuration" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.326878204Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=6.382483ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.330560526Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.331520173Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=959.337µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.336674207Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.34316436Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=6.488794ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.346383037Z level=info msg="Executing migration" id=create_ngalert_configuration_table 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.347044622Z level=info 
msg="Migration successfully executed" id=create_ngalert_configuration_table duration=661.865µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.351167897Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.351952904Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=784.357µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.357000516Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.36345988Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=6.458634ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.366687017Z level=info msg="Executing migration" id="create provenance_type table" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.367472094Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=784.697µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.3706379Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.371667948Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=1.009658ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.376995213Z level=info msg="Executing migration" id="create alert_image table" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.37781257Z level=info msg="Migration successfully executed" id="create alert_image table" duration=816.687µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.380966897Z level=info msg="Executing migration" id="add unique index on token to alert_image table" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.381925944Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=958.697µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.387105758Z level=info msg="Executing migration" id="support longer URLs in alert_image table" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.387125428Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=20.27µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.392847746Z level=info msg="Executing migration" id=create_alert_configuration_history_table 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.393794153Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=946.017µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.398766555Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.400228317Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=1.461022ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.405022047Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.40540495Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists" 15:01:13 grafana | 
logger=migrator t=2025-06-16T14:55:04.411179218Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.411589831Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=409.993µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.416360731Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.417389711Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=1.02877ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.423112378Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.429987355Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=6.874377ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.439189632Z level=info msg="Executing migration" id="create library_element table v1" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.441232919Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=2.043387ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.545229727Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.546558819Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=1.329512ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.670858816Z level=info msg="Executing migration" id="create library_element_connection table v1" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.671937405Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=1.079649ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.677253509Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.678104587Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=850.978µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.682365842Z level=info msg="Executing migration" id="add unique index library_element org_id_uid" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.683108848Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=742.766µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.690139167Z level=info msg="Executing migration" id="increase max description length to 2048" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.690254928Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=116.831µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.695340061Z level=info msg="Executing migration" id="alter library_element model to mediumtext" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.695386491Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=47.72µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.700362843Z level=info 
msg="Executing migration" id="add library_element folder uid" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.706186411Z level=info msg="Migration successfully executed" id="add library_element folder uid" duration=5.823018ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.710324505Z level=info msg="Executing migration" id="populate library_element folder_uid" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.710750959Z level=info msg="Migration successfully executed" id="populate library_element folder_uid" duration=425.944µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.713851914Z level=info msg="Executing migration" id="add index library_element org_id-folder_uid-name-kind" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.714781883Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_uid-name-kind" duration=929.789µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.720499271Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.720782193Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=282.852µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.725501802Z level=info msg="Executing migration" id="create data_keys table" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.726768693Z level=info msg="Migration successfully executed" id="create data_keys table" duration=1.267171ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.730910537Z level=info msg="Executing migration" id="create secrets table" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.731784765Z level=info msg="Migration successfully executed" id="create secrets table" duration=874.198µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.737294571Z level=info msg="Executing migration" id="rename data_keys name column to id" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.772456264Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=35.163283ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.776479918Z level=info msg="Executing migration" id="add name column into data_keys" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.787192468Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=10.71137ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.790377424Z level=info msg="Executing migration" id="copy data_keys id column values into name" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.790481285Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=103.901µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.794236066Z level=info msg="Executing migration" id="rename data_keys name column to label" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.834457501Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=40.215345ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.840733634Z level=info msg="Executing migration" id="rename data_keys id column back to name" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.880209784Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=39.48248ms 15:01:13 grafana | logger=migrator 
t=2025-06-16T14:55:04.886011562Z level=info msg="Executing migration" id="create kv_store table v1" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.886845349Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=837.117µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.892591477Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.893371913Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=780.196µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.897806251Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.898027192Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=221.211µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.90251348Z level=info msg="Executing migration" id="create permission table" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.903500848Z level=info msg="Migration successfully executed" id="create permission table" duration=989.438µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.908182868Z level=info msg="Executing migration" id="add unique index permission.role_id" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.909105105Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=921.437µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.916012813Z level=info msg="Executing migration" id="add unique index role_id_action_scope" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.917067211Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=1.054738ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.921561278Z level=info msg="Executing migration" id="create role table" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.922240425Z level=info msg="Migration successfully executed" id="create role table" duration=679.157µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.925140509Z level=info msg="Executing migration" id="add column display_name" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.931515812Z level=info msg="Migration successfully executed" id="add column display_name" duration=6.407403ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.93487841Z level=info msg="Executing migration" id="add column group_name" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.942248212Z level=info msg="Migration successfully executed" id="add column group_name" duration=7.369322ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.947356014Z level=info msg="Executing migration" id="add index role.org_id" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.948402073Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=1.045839ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.951405698Z level=info msg="Executing migration" id="add unique index role_org_id_name" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.952711889Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=1.308381ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.955945936Z level=info msg="Executing migration" id="add index role_org_id_uid" 
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.956711312Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=765.166µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.962244268Z level=info msg="Executing migration" id="create team role table" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.963270937Z level=info msg="Migration successfully executed" id="create team role table" duration=1.026618ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.96958546Z level=info msg="Executing migration" id="add index team_role.org_id" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.97078646Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=1.1996ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.976655018Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.977886059Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=1.230071ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.980744363Z level=info msg="Executing migration" id="add index team_role.team_id" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.981974253Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=1.23035ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.985820125Z level=info msg="Executing migration" id="create user role table" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.987313748Z level=info msg="Migration successfully executed" id="create user role table" duration=1.497253ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.990852727Z level=info msg="Executing migration" id="add index user_role.org_id" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.991971687Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=1.11839ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.996726236Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:04.998124748Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=1.396912ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.001383835Z level=info msg="Executing migration" id="add index user_role.user_id" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.00316668Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=1.781905ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.007429675Z level=info msg="Executing migration" id="create builtin role table" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.008282782Z level=info msg="Migration successfully executed" id="create builtin role table" duration=852.987µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.014046581Z level=info msg="Executing migration" id="add index builtin_role.role_id" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.015415072Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=1.366901ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.019700098Z level=info msg="Executing migration" id="add index builtin_role.name" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.021432212Z level=info msg="Migration successfully 
executed" id="add index builtin_role.name" duration=1.731464ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.024663199Z level=info msg="Executing migration" id="Add column org_id to builtin_role table" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.032692986Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=8.028557ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.037563467Z level=info msg="Executing migration" id="add index builtin_role.org_id" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.038633085Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=1.068898ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.041296408Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.042394887Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=1.098019ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.045590524Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.046623702Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=1.033598ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.052069867Z level=info msg="Executing migration" id="add unique index role.uid" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.053077075Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=1.006988ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.05720629Z level=info msg="Executing migration" id="create seed assignment table" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.057939736Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=735.326µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.06196812Z level=info msg="Executing migration" id="add unique index builtin_role_role_name" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.063011349Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=1.042529ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.069381652Z level=info msg="Executing migration" id="add column hidden to role table" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.080089381Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=10.707229ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.099522933Z level=info msg="Executing migration" id="permission kind migration" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.109483967Z level=info msg="Migration successfully executed" id="permission kind migration" duration=9.962164ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.112611273Z level=info msg="Executing migration" id="permission attribute migration" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.120816651Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=8.182318ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.125970244Z level=info msg="Executing migration" id="permission identifier migration" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.135250892Z level=info msg="Migration successfully executed" id="permission identifier 
migration" duration=9.279299ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.139428106Z level=info msg="Executing migration" id="add permission identifier index" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.140336334Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=907.918µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.143117977Z level=info msg="Executing migration" id="add permission action scope role_id index" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.143983054Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=864.697µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.149899714Z level=info msg="Executing migration" id="remove permission role_id action scope index" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.151030443Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=1.130139ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.15415935Z level=info msg="Executing migration" id="add group mapping UID column to user_role table" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.162340157Z level=info msg="Migration successfully executed" id="add group mapping UID column to user_role table" duration=8.180337ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.165886367Z level=info msg="Executing migration" id="add user_role org ID, user ID, role ID, group mapping UID index" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.167256238Z level=info msg="Migration successfully executed" id="add user_role org ID, user ID, role ID, group mapping UID index" duration=1.369121ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.180290887Z level=info msg="Executing migration" id="remove user_role org ID, user ID, role ID index" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.181760609Z level=info msg="Migration successfully executed" id="remove user_role org ID, user ID, role ID index" duration=1.471562ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.185285229Z level=info msg="Executing migration" id="create query_history table v1" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.186836602Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=1.546783ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.190767755Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.192749061Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=1.974566ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.199459547Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.199543348Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=83.401µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.203752643Z level=info msg="Executing migration" id="create query_history_details table v1" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.205561558Z level=info msg="Migration successfully executed" id="create query_history_details table v1" duration=1.807615ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.209729502Z 
level=info msg="Executing migration" id="rbac disabled migrator" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.209813463Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=84.151µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.216737631Z level=info msg="Executing migration" id="teams permissions migration" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.217360966Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=622.475µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.220837435Z level=info msg="Executing migration" id="dashboard permissions" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.221681643Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=847.638µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.226931867Z level=info msg="Executing migration" id="dashboard permissions uid scopes" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.227817194Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=884.767µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.233522802Z level=info msg="Executing migration" id="drop managed folder create actions" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.233824214Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=299.492µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.24172299Z level=info msg="Executing migration" id="alerting notification permissions" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.242363265Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=643.935µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.249808788Z level=info msg="Executing migration" id="create query_history_star table v1" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.250747115Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=937.857µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.280992978Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.282855953Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=1.861615ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.288873634Z level=info msg="Executing migration" id="add column org_id in query_history_star" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.298927277Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=10.042283ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.301911972Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.301927042Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=15.52µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.30527658Z level=info msg="Executing migration" id="create correlation table v1" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.306010376Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=733.226µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.30882616Z level=info msg="Executing 
migration" id="add index correlations.uid" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.309713547Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=885.767µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.315310654Z level=info msg="Executing migration" id="add index correlations.source_uid" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.31717295Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=1.861276ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.320642528Z level=info msg="Executing migration" id="add correlation config column" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.330023436Z level=info msg="Migration successfully executed" id="add correlation config column" duration=9.382708ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.333695618Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.334862377Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=1.166639ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.340399713Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.341497822Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=1.100549ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.344506347Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.367704621Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=23.196314ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.665880959Z level=info msg="Executing migration" id="create correlation v2" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.669308347Z level=info msg="Migration successfully executed" id="create correlation v2" duration=3.427148ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.674094987Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.676391337Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=2.29879ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.688451237Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.690384573Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=1.933356ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.70066553Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.701902539Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=1.236909ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.705941524Z level=info msg="Executing migration" id="copy correlation v1 to v2" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.706241196Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=301.613µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.71033484Z level=info 
msg="Executing migration" id="drop correlation_tmp_qwerty" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.711391328Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=1.055338ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.7594748Z level=info msg="Executing migration" id="add provisioning column" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.771344699Z level=info msg="Migration successfully executed" id="add provisioning column" duration=11.869329ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.775626094Z level=info msg="Executing migration" id="add type column" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.78455282Z level=info msg="Migration successfully executed" id="add type column" duration=8.923496ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.795879784Z level=info msg="Executing migration" id="create entity_events table" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.797355506Z level=info msg="Migration successfully executed" id="create entity_events table" duration=1.477722ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.801922354Z level=info msg="Executing migration" id="create dashboard public config v1" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.803750419Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=1.827725ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.808486239Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.809020523Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.811964858Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.812400742Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.814926812Z level=info msg="Executing migration" id="Drop old dashboard public config table" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.81575456Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=827.188µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.821901981Z level=info msg="Executing migration" id="recreate dashboard public config v1" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.82309126Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=1.189659ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.827308246Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.828499966Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=1.2256ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.8325922Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.83386854Z level=info msg="Migration successfully executed" id="create index 
IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=1.27584ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.849471081Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.850824252Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=1.355271ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.854940427Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.856225287Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.28612ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.860618744Z level=info msg="Executing migration" id="Drop public config table" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.861483691Z level=info msg="Migration successfully executed" id="Drop public config table" duration=862.177µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.864279264Z level=info msg="Executing migration" id="Recreate dashboard public config v2" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.865486784Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=1.20701ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.869523519Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.870728668Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=1.204899ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.875823421Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.876951661Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.12856ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.879781315Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.880837223Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=1.055778ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.883839008Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.909416461Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=25.571143ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.915005918Z level=info msg="Executing migration" id="add annotations_enabled column" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.922207779Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=7.200621ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.933965487Z level=info msg="Executing migration" id="add time_selection_enabled column" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.946838104Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=12.857177ms 15:01:13 
grafana | logger=migrator t=2025-06-16T14:55:05.950461875Z level=info msg="Executing migration" id="delete orphaned public dashboards"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.950766997Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=306.202µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.955368136Z level=info msg="Executing migration" id="add share column"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.96191819Z level=info msg="Migration successfully executed" id="add share column" duration=6.549934ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.966397717Z level=info msg="Executing migration" id="backfill empty share column fields with default of public"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.966551939Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=152.152µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.969055249Z level=info msg="Executing migration" id="create file table"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.970033018Z level=info msg="Migration successfully executed" id="create file table" duration=977.239µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.975211801Z level=info msg="Executing migration" id="file table idx: path natural pk"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.976537622Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=1.325631ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.980853698Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.982113138Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=1.26112ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.98708711Z level=info msg="Executing migration" id="create file_meta table"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.988050308Z level=info msg="Migration successfully executed" id="create file_meta table" duration=963.098µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.992493735Z level=info msg="Executing migration" id="file table idx: path key"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.993911287Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=1.418362ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.998726797Z level=info msg="Executing migration" id="set path collation in file table"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:05.998740957Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=15.12µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.002718471Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.002757621Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=42.58µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.006682579Z level=info msg="Executing migration" id="managed permissions migration"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.007599578Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=917.089µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.011166995Z level=info msg="Executing migration" id="managed folder permissions alert actions migration"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.011469359Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=302.404µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.015889236Z level=info msg="Executing migration" id="RBAC action name migrator"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.01719186Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=1.302454ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.021468674Z level=info msg="Executing migration" id="Add UID column to playlist"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.03065398Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=9.184056ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.034295839Z level=info msg="Executing migration" id="Update uid column values in playlist"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.03441477Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=118.531µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.037700355Z level=info msg="Executing migration" id="Add index for uid in playlist"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.038515453Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=817.748µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.044750938Z level=info msg="Executing migration" id="update group index for alert rules"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.045166993Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=416.805µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.048277246Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.048492518Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=215.152µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.053672202Z level=info msg="Executing migration" id="admin only folder/dashboard permission"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.054180168Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=507.696µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.057968827Z level=info msg="Executing migration" id="add action column to seed_assignment"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.070915733Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=12.951116ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.076126568Z level=info msg="Executing migration" id="add scope column to seed_assignment"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.085991771Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=9.858963ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.088643489Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.089446567Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=803.058µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.093438919Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.16981475Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=76.361621ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.175249017Z level=info msg="Executing migration" id="add unique index builtin_role_name back"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.176188177Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=939.16µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.179062147Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.179870566Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=812.669µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.187106302Z level=info msg="Executing migration" id="add primary key to seed_assigment"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.220085878Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=32.957845ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.227313424Z level=info msg="Executing migration" id="add origin column to seed_assignment"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.236800083Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=9.486019ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.246409063Z level=info msg="Executing migration" id="add origin to plugin seed_assignment"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.246886519Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=476.996µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.250020912Z level=info msg="Executing migration" id="prevent seeding OnCall access"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.250321905Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=305.793µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.255864633Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.256193687Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=328.334µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.258712103Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.259036446Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=326.163µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.262133379Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.262453922Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=319.783µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.265530024Z level=info msg="Executing migration" id="create folder table"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.266650267Z level=info msg="Migration successfully executed" id="create folder table" duration=1.119793ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.271903952Z level=info msg="Executing migration" id="Add index for parent_uid"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.273178405Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=1.274073ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.276302407Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.277591451Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=1.287144ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.281783045Z level=info msg="Executing migration" id="Update folder title length"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.281917666Z level=info msg="Migration successfully executed" id="Update folder title length" duration=138.111µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.287151471Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.288344843Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=1.192782ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.291520158Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.29270421Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=1.183651ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.295844662Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.297048986Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=1.203904ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.301432331Z level=info msg="Executing migration" id="Sync dashboard and folder table"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.301957526Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=525.045µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.30518102Z level=info msg="Executing migration" id="Remove ghost folders from the folder table"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.305543165Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=362.125µs
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.308726258Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.310587127Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=1.860749ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.314985833Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.316787143Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=1.80159ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.321105047Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.322337021Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=1.231604ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.328421864Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.330464695Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=2.042471ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.337637281Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.338836213Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=1.198752ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.342899556Z level=info msg="Executing migration" id="Remove unique index UQE_folder_org_id_parent_uid_title"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.344797166Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_org_id_parent_uid_title" duration=1.89718ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.424713774Z level=info msg="Executing migration" id="create anon_device table"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.426648415Z level=info msg="Migration successfully executed" id="create anon_device table" duration=1.936381ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.486802715Z level=info msg="Executing migration" id="add unique index anon_device.device_id"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.488538004Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=1.739619ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.555587727Z level=info msg="Executing migration" id="add index anon_device.updated_at"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.559170585Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=3.584918ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.625674662Z level=info msg="Executing migration" id="create signing_key table"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.627033296Z level=info msg="Migration successfully executed" id="create signing_key table" duration=1.368304ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.675769788Z level=info msg="Executing migration" id="add unique index signing_key.key_id"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.685096676Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=9.329597ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.68841518Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.689924646Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=1.508936ms
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.692948928Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore"
15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.693350512Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=402.114µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.697501825Z level=info msg="Executing migration" id="Add folder_uid for dashboard" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.704529479Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=7.029684ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.708379389Z level=info msg="Executing migration" id="Populate dashboard folder_uid column" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.708904856Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=526.087µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.713304831Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.713330272Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=27.541µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.717379065Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.721377566Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=3.997191ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.725261296Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.725277687Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=17.101µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.727742483Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.728657463Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=914.98µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.730630493Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.731446912Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=819.489µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.735422333Z level=info msg="Executing migration" id="Remove unique index for dashboard_org_id_folder_uid_title_is_folder" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.736652886Z level=info msg="Migration successfully executed" id="Remove unique index for dashboard_org_id_folder_uid_title_is_folder" duration=1.231213ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.739278594Z level=info msg="Executing migration" id="create sso_setting table" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.740424376Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=1.145713ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.748794054Z level=info msg="Executing migration" id="copy kvstore migration status to each org" 15:01:13 grafana | logger=migrator 
t=2025-06-16T14:55:06.750088347Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=1.295583ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.752711605Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.752962247Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=251.042µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.754915058Z level=info msg="Executing migration" id="managed dashboard permissions annotation actions migration" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.755439353Z level=info msg="Migration successfully executed" id="managed dashboard permissions annotation actions migration" duration=521.125µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.758069031Z level=info msg="Executing migration" id="create cloud_migration table v1" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.759193682Z level=info msg="Migration successfully executed" id="create cloud_migration table v1" duration=1.127911ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.764332666Z level=info msg="Executing migration" id="create cloud_migration_run table v1" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.766284317Z level=info msg="Migration successfully executed" id="create cloud_migration_run table v1" duration=1.954051ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.772092858Z level=info msg="Executing migration" id="add stack_id column" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.784927563Z level=info msg="Migration successfully executed" id="add stack_id column" duration=12.830595ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.789390969Z level=info msg="Executing migration" id="add region_slug column" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.799717718Z level=info msg="Migration successfully executed" id="add region_slug column" duration=10.323709ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.804046513Z level=info msg="Executing migration" id="add cluster_slug column" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.814294611Z level=info msg="Migration successfully executed" id="add cluster_slug column" duration=10.245398ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.81996697Z level=info msg="Executing migration" id="add migration uid column" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.827251367Z level=info msg="Migration successfully executed" id="add migration uid column" duration=7.285277ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.832016416Z level=info msg="Executing migration" id="Update uid column values for migration" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.832168719Z level=info msg="Migration successfully executed" id="Update uid column values for migration" duration=152.653µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.836400043Z level=info msg="Executing migration" id="Add unique index migration_uid" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.837277002Z level=info msg="Migration successfully executed" id="Add unique index migration_uid" duration=876.518µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.841608537Z level=info msg="Executing migration" id="add migration run uid column" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.851881165Z level=info 
msg="Migration successfully executed" id="add migration run uid column" duration=10.271818ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.859490765Z level=info msg="Executing migration" id="Update uid column values for migration run" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.859679247Z level=info msg="Migration successfully executed" id="Update uid column values for migration run" duration=188.691µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.86664977Z level=info msg="Executing migration" id="Add unique index migration_run_uid" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.868068205Z level=info msg="Migration successfully executed" id="Add unique index migration_run_uid" duration=1.417374ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.872739214Z level=info msg="Executing migration" id="Rename table cloud_migration to cloud_migration_session_tmp_qwerty - v1" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.899278242Z level=info msg="Migration successfully executed" id="Rename table cloud_migration to cloud_migration_session_tmp_qwerty - v1" duration=26.536988ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.90482098Z level=info msg="Executing migration" id="create cloud_migration_session v2" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.905559018Z level=info msg="Migration successfully executed" id="create cloud_migration_session v2" duration=737.768µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.909633311Z level=info msg="Executing migration" id="create index UQE_cloud_migration_session_uid - v2" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.911688052Z level=info msg="Migration successfully executed" id="create index UQE_cloud_migration_session_uid - v2" duration=2.053941ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.916708995Z level=info msg="Executing migration" id="copy cloud_migration_session v1 to v2" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.917047189Z level=info msg="Migration successfully executed" id="copy cloud_migration_session v1 to v2" duration=338.284µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.920853238Z level=info msg="Executing migration" id="drop cloud_migration_session_tmp_qwerty" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.921746538Z level=info msg="Migration successfully executed" id="drop cloud_migration_session_tmp_qwerty" duration=895.71µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.92579159Z level=info msg="Executing migration" id="Rename table cloud_migration_run to cloud_migration_snapshot_tmp_qwerty - v1" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.949772792Z level=info msg="Migration successfully executed" id="Rename table cloud_migration_run to cloud_migration_snapshot_tmp_qwerty - v1" duration=23.971202ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.955823366Z level=info msg="Executing migration" id="create cloud_migration_snapshot v2" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.956582553Z level=info msg="Migration successfully executed" id="create cloud_migration_snapshot v2" duration=758.947µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.960898188Z level=info msg="Executing migration" id="create index UQE_cloud_migration_snapshot_uid - v2" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.962107881Z level=info msg="Migration successfully executed" id="create index UQE_cloud_migration_snapshot_uid - v2" 
duration=1.208763ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.965401256Z level=info msg="Executing migration" id="copy cloud_migration_snapshot v1 to v2" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.965732729Z level=info msg="Migration successfully executed" id="copy cloud_migration_snapshot v1 to v2" duration=331.203µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.969198835Z level=info msg="Executing migration" id="drop cloud_migration_snapshot_tmp_qwerty" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.970079105Z level=info msg="Migration successfully executed" id="drop cloud_migration_snapshot_tmp_qwerty" duration=882.52µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.975207339Z level=info msg="Executing migration" id="add snapshot upload_url column" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.986688589Z level=info msg="Migration successfully executed" id="add snapshot upload_url column" duration=11.48137ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.989935893Z level=info msg="Executing migration" id="add snapshot status column" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:06.999332932Z level=info msg="Migration successfully executed" id="add snapshot status column" duration=9.395959ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.002777056Z level=info msg="Executing migration" id="add snapshot local_directory column" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.009931711Z level=info msg="Migration successfully executed" id="add snapshot local_directory column" duration=7.154035ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.015468947Z level=info msg="Executing migration" id="add snapshot gms_snapshot_uid column" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.02552277Z level=info msg="Migration successfully executed" id="add snapshot gms_snapshot_uid column" duration=10.052023ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.030031716Z level=info msg="Executing migration" id="add snapshot encryption_key column" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.04007578Z level=info msg="Migration successfully executed" id="add snapshot encryption_key column" duration=10.043604ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.043560509Z level=info msg="Executing migration" id="add snapshot error_string column" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.053657133Z level=info msg="Migration successfully executed" id="add snapshot error_string column" duration=10.095434ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.058537293Z level=info msg="Executing migration" id="create cloud_migration_resource table v1" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.059546052Z level=info msg="Migration successfully executed" id="create cloud_migration_resource table v1" duration=1.005199ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.065607052Z level=info msg="Executing migration" id="delete cloud_migration_snapshot.result column" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.103036401Z level=info msg="Migration successfully executed" id="delete cloud_migration_snapshot.result column" duration=37.426379ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.106673742Z level=info msg="Executing migration" id="add cloud_migration_resource.name column" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.116467563Z level=info msg="Migration successfully executed" id="add 
cloud_migration_resource.name column" duration=9.787811ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.122394082Z level=info msg="Executing migration" id="add cloud_migration_resource.parent_name column" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.130174077Z level=info msg="Migration successfully executed" id="add cloud_migration_resource.parent_name column" duration=7.777775ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.133516354Z level=info msg="Executing migration" id="add cloud_migration_session.org_id column" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.143658519Z level=info msg="Migration successfully executed" id="add cloud_migration_session.org_id column" duration=10.140925ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.147105407Z level=info msg="Executing migration" id="add cloud_migration_resource.error_code column" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.154142025Z level=info msg="Migration successfully executed" id="add cloud_migration_resource.error_code column" duration=7.035448ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.159407179Z level=info msg="Executing migration" id="increase resource_uid column length" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.159422929Z level=info msg="Migration successfully executed" id="increase resource_uid column length" duration=16.29µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.162401834Z level=info msg="Executing migration" id="alter kv_store.value to longtext" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.162415054Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=13.72µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.165667361Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.178071504Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=12.403073ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.182344819Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.192290751Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=9.944882ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.195303406Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.195579958Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=275.922µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.198708764Z level=info msg="Executing migration" id="managed folder permissions alerting silences actions migration" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.198874215Z level=info msg="Migration successfully executed" id="managed folder permissions alerting silences actions migration" duration=165.111µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.202030753Z level=info msg="Executing migration" id="add record column to alert_rule table" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.214243254Z level=info msg="Migration successfully executed" id="add record column to alert_rule table" duration=12.212581ms 15:01:13 grafana | 
logger=migrator t=2025-06-16T14:55:07.218185936Z level=info msg="Executing migration" id="add record column to alert_rule_version table" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.225838729Z level=info msg="Migration successfully executed" id="add record column to alert_rule_version table" duration=7.650423ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.229916734Z level=info msg="Executing migration" id="add resolved_at column to alert_instance table" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.240298549Z level=info msg="Migration successfully executed" id="add resolved_at column to alert_instance table" duration=10.380395ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.243645677Z level=info msg="Executing migration" id="add last_sent_at column to alert_instance table" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.253931592Z level=info msg="Migration successfully executed" id="add last_sent_at column to alert_instance table" duration=10.259185ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.2584394Z level=info msg="Executing migration" id="Add scope to alert.notifications.receivers:read and alert.notifications.receivers.secrets:read" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.259094635Z level=info msg="Migration successfully executed" id="Add scope to alert.notifications.receivers:read and alert.notifications.receivers.secrets:read" duration=655.025µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.26205184Z level=info msg="Executing migration" id="add metadata column to alert_rule table" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.275662103Z level=info msg="Migration successfully executed" id="add metadata column to alert_rule table" duration=13.607223ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.280727305Z level=info msg="Executing migration" id="add metadata column to alert_rule_version table" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.293667361Z level=info msg="Migration successfully executed" id="add metadata column to alert_rule_version table" duration=12.943486ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.297160211Z level=info msg="Executing migration" id="delete orphaned service account permissions" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.297370003Z level=info msg="Migration successfully executed" id="delete orphaned service account permissions" duration=209.882µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.301963121Z level=info msg="Executing migration" id="adding action set permissions" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.302393214Z level=info msg="Migration successfully executed" id="adding action set permissions" duration=429.993µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.305307988Z level=info msg="Executing migration" id="create user_external_session table" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.306566519Z level=info msg="Migration successfully executed" id="create user_external_session table" duration=1.258181ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.311013005Z level=info msg="Executing migration" id="increase name_id column length to 1024" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.311030575Z level=info msg="Migration successfully executed" id="increase name_id column length to 1024" duration=18.64µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.314068621Z level=info msg="Executing migration" 
id="increase session_id column length to 1024" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.314099062Z level=info msg="Migration successfully executed" id="increase session_id column length to 1024" duration=31.431µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.317291228Z level=info msg="Executing migration" id="remove scope from alert.notifications.receivers:create" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.317917643Z level=info msg="Migration successfully executed" id="remove scope from alert.notifications.receivers:create" duration=625.745µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.322567152Z level=info msg="Executing migration" id="add created_by column to alert_rule_version table" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.332868026Z level=info msg="Migration successfully executed" id="add created_by column to alert_rule_version table" duration=10.297814ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.33569488Z level=info msg="Executing migration" id="add updated_by column to alert_rule table" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.342746939Z level=info msg="Migration successfully executed" id="add updated_by column to alert_rule table" duration=7.051579ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.345610842Z level=info msg="Executing migration" id="add alert_rule_state table" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.346381118Z level=info msg="Migration successfully executed" id="add alert_rule_state table" duration=769.886µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.354742528Z level=info msg="Executing migration" id="add index to alert_rule_state on org_id and rule_uid columns" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.355902407Z level=info msg="Migration successfully executed" id="add index to alert_rule_state on org_id and rule_uid columns" duration=1.160689ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.358953693Z level=info msg="Executing migration" id="add guid column to alert_rule table" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.366219393Z level=info msg="Migration successfully executed" id="add guid column to alert_rule table" duration=7.265ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.369032486Z level=info msg="Executing migration" id="add rule_guid column to alert_rule_version table" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.376114616Z level=info msg="Migration successfully executed" id="add rule_guid column to alert_rule_version table" duration=7.081449ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.380438061Z level=info msg="Executing migration" id="cleanup alert_rule_version table" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.380458491Z level=info msg="Rule version record limit is not set, fallback to 100" limit=0 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.380713213Z level=info msg="Cleaning up table `alert_rule_version`" batchSize=50 batches=0 keepVersions=100 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.380727583Z level=info msg="Migration successfully executed" id="cleanup alert_rule_version table" duration=289.542µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.38276363Z level=info msg="Executing migration" id="populate rule guid in alert rule table" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.383307954Z level=info msg="Migration successfully executed" id="populate rule guid in alert 
rule table" duration=543.314µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.386116198Z level=info msg="Executing migration" id="drop index in alert_rule_version table on rule_org_id, rule_uid and version columns" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.387008106Z level=info msg="Migration successfully executed" id="drop index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=891.768µs 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.391382522Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid, rule_guid and version columns" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.392869324Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid, rule_guid and version columns" duration=1.485963ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.396343653Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_guid and version columns" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.39834322Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_guid and version columns" duration=1.999277ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.401786428Z level=info msg="Executing migration" id="add index in alert_rule table on guid columns" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.403137669Z level=info msg="Migration successfully executed" id="add index in alert_rule table on guid columns" duration=1.355091ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.406353437Z level=info msg="Executing migration" id="add keep_firing_for column to alert_rule" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.417915572Z level=info msg="Migration successfully executed" id="add keep_firing_for column to alert_rule" duration=11.555295ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.422780052Z level=info msg="Executing migration" id="add keep_firing_for column to alert_rule_version" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.434140896Z level=info msg="Migration successfully executed" id="add keep_firing_for column to alert_rule_version" duration=11.362354ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.438420482Z level=info msg="Executing migration" id="add missing_series_evals_to_resolve column to alert_rule" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.44555347Z level=info msg="Migration successfully executed" id="add missing_series_evals_to_resolve column to alert_rule" duration=7.132338ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.448728848Z level=info msg="Executing migration" id="add missing_series_evals_to_resolve column to alert_rule_version" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.458596759Z level=info msg="Migration successfully executed" id="add missing_series_evals_to_resolve column to alert_rule_version" duration=9.866981ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.463472959Z level=info msg="Executing migration" id="remove the datasources:drilldown action" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.463674051Z level=info msg="Removed 0 datasources:drilldown permissions" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.463691691Z level=info msg="Migration successfully executed" id="remove the datasources:drilldown action" duration=219.042µs 15:01:13 grafana | logger=migrator 
t=2025-06-16T14:55:07.466855367Z level=info msg="Executing migration" id="remove title in folder unique index" 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.467956056Z level=info msg="Migration successfully executed" id="remove title in folder unique index" duration=1.100409ms 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.472373973Z level=info msg="migrations completed" performed=654 skipped=0 duration=5.599125275s 15:01:13 grafana | logger=migrator t=2025-06-16T14:55:07.473730605Z level=info msg="Unlocking database" 15:01:13 grafana | logger=sqlstore t=2025-06-16T14:55:07.492160217Z level=info msg="Created default admin" user=admin 15:01:13 grafana | logger=sqlstore t=2025-06-16T14:55:07.492394869Z level=info msg="Created default organization" 15:01:13 grafana | logger=secrets t=2025-06-16T14:55:07.497212089Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1 15:01:13 grafana | logger=plugin.angulardetectorsprovider.dynamic t=2025-06-16T14:55:07.576776168Z level=info msg="Restored cache from database" duration=553.544µs 15:01:13 grafana | logger=resource-migrator t=2025-06-16T14:55:07.586205257Z level=info msg="Locking database" 15:01:13 grafana | logger=resource-migrator t=2025-06-16T14:55:07.586224097Z level=info msg="Starting DB migrations" 15:01:13 grafana | logger=resource-migrator t=2025-06-16T14:55:07.593753029Z level=info msg="Executing migration" id="create resource_migration_log table" 15:01:13 grafana | logger=resource-migrator t=2025-06-16T14:55:07.594625287Z level=info msg="Migration successfully executed" id="create resource_migration_log table" duration=871.748µs 15:01:13 grafana | logger=resource-migrator t=2025-06-16T14:55:07.599073493Z level=info msg="Executing migration" id="Initialize resource tables" 15:01:13 grafana | logger=resource-migrator t=2025-06-16T14:55:07.599088083Z level=info msg="Migration successfully executed" id="Initialize resource tables" duration=15.13µs 15:01:13 grafana | logger=resource-migrator t=2025-06-16T14:55:07.603011486Z level=info msg="Executing migration" id="drop table resource" 15:01:13 grafana | logger=resource-migrator t=2025-06-16T14:55:07.603072897Z level=info msg="Migration successfully executed" id="drop table resource" duration=61.621µs 15:01:13 grafana | logger=resource-migrator t=2025-06-16T14:55:07.605924871Z level=info msg="Executing migration" id="create table resource" 15:01:13 grafana | logger=resource-migrator t=2025-06-16T14:55:07.607734355Z level=info msg="Migration successfully executed" id="create table resource" duration=1.803604ms 15:01:13 grafana | logger=resource-migrator t=2025-06-16T14:55:07.611246975Z level=info msg="Executing migration" id="create table resource, index: 0" 15:01:13 grafana | logger=resource-migrator t=2025-06-16T14:55:07.612651686Z level=info msg="Migration successfully executed" id="create table resource, index: 0" duration=1.405311ms 15:01:13 grafana | logger=resource-migrator t=2025-06-16T14:55:07.617597447Z level=info msg="Executing migration" id="drop table resource_history" 15:01:13 grafana | logger=resource-migrator t=2025-06-16T14:55:07.617708488Z level=info msg="Migration successfully executed" id="drop table resource_history" duration=110.271µs 15:01:13 grafana | logger=resource-migrator t=2025-06-16T14:55:07.620785853Z level=info msg="Executing migration" id="create table resource_history" 15:01:13 grafana | logger=resource-migrator t=2025-06-16T14:55:07.621886962Z level=info msg="Migration successfully executed" id="create 
table resource_history" duration=1.101519ms 15:01:13 grafana | logger=resource-migrator t=2025-06-16T14:55:07.624871528Z level=info msg="Executing migration" id="create table resource_history, index: 0" 15:01:13 grafana | logger=resource-migrator t=2025-06-16T14:55:07.626196748Z level=info msg="Migration successfully executed" id="create table resource_history, index: 0" duration=1.32477ms 15:01:13 grafana | logger=resource-migrator t=2025-06-16T14:55:07.63005288Z level=info msg="Executing migration" id="create table resource_history, index: 1" 15:01:13 grafana | logger=resource-migrator t=2025-06-16T14:55:07.631290251Z level=info msg="Migration successfully executed" id="create table resource_history, index: 1" duration=1.236601ms 15:01:13 grafana | logger=resource-migrator t=2025-06-16T14:55:07.634274166Z level=info msg="Executing migration" id="drop table resource_version" 15:01:13 grafana | logger=resource-migrator t=2025-06-16T14:55:07.634398417Z level=info msg="Migration successfully executed" id="drop table resource_version" duration=119.941µs 15:01:13 grafana | logger=resource-migrator t=2025-06-16T14:55:07.637684473Z level=info msg="Executing migration" id="create table resource_version" 15:01:13 grafana | logger=resource-migrator t=2025-06-16T14:55:07.638661791Z level=info msg="Migration successfully executed" id="create table resource_version" duration=976.548µs 15:01:13 grafana | logger=resource-migrator t=2025-06-16T14:55:07.64324092Z level=info msg="Executing migration" id="create table resource_version, index: 0" 15:01:13 grafana | logger=resource-migrator t=2025-06-16T14:55:07.64446273Z level=info msg="Migration successfully executed" id="create table resource_version, index: 0" duration=1.221619ms 15:01:13 grafana | logger=resource-migrator t=2025-06-16T14:55:07.648890996Z level=info msg="Executing migration" id="drop table resource_blob" 15:01:13 grafana | logger=resource-migrator t=2025-06-16T14:55:07.649024237Z level=info msg="Migration successfully executed" id="drop table resource_blob" duration=132.541µs 15:01:13 grafana | logger=resource-migrator t=2025-06-16T14:55:07.652219684Z level=info msg="Executing migration" id="create table resource_blob" 15:01:13 grafana | logger=resource-migrator t=2025-06-16T14:55:07.653521515Z level=info msg="Migration successfully executed" id="create table resource_blob" duration=1.297301ms 15:01:13 grafana | logger=resource-migrator t=2025-06-16T14:55:07.65658356Z level=info msg="Executing migration" id="create table resource_blob, index: 0" 15:01:13 grafana | logger=resource-migrator t=2025-06-16T14:55:07.65784725Z level=info msg="Migration successfully executed" id="create table resource_blob, index: 0" duration=1.26129ms 15:01:13 grafana | logger=resource-migrator t=2025-06-16T14:55:07.74096528Z level=info msg="Executing migration" id="create table resource_blob, index: 1" 15:01:13 grafana | logger=resource-migrator t=2025-06-16T14:55:07.743002257Z level=info msg="Migration successfully executed" id="create table resource_blob, index: 1" duration=2.033397ms 15:01:13 grafana | logger=resource-migrator t=2025-06-16T14:55:07.746740538Z level=info msg="Executing migration" id="Add column previous_resource_version in resource_history" 15:01:13 grafana | logger=resource-migrator t=2025-06-16T14:55:07.759496004Z level=info msg="Migration successfully executed" id="Add column previous_resource_version in resource_history" duration=12.751986ms 15:01:13 grafana | logger=resource-migrator t=2025-06-16T14:55:07.76389783Z level=info 
msg="Executing migration" id="Add column previous_resource_version in resource" 15:01:13 grafana | logger=resource-migrator t=2025-06-16T14:55:07.772719253Z level=info msg="Migration successfully executed" id="Add column previous_resource_version in resource" duration=8.819903ms 15:01:13 grafana | logger=resource-migrator t=2025-06-16T14:55:07.776212782Z level=info msg="Executing migration" id="Add index to resource_history for polling" 15:01:13 grafana | logger=resource-migrator t=2025-06-16T14:55:07.77722555Z level=info msg="Migration successfully executed" id="Add index to resource_history for polling" duration=1.012698ms 15:01:13 grafana | logger=resource-migrator t=2025-06-16T14:55:07.780397007Z level=info msg="Executing migration" id="Add index to resource for loading" 15:01:13 grafana | logger=resource-migrator t=2025-06-16T14:55:07.781304684Z level=info msg="Migration successfully executed" id="Add index to resource for loading" duration=907.307µs 15:01:13 grafana | logger=resource-migrator t=2025-06-16T14:55:07.785036005Z level=info msg="Executing migration" id="Add column folder in resource_history" 15:01:13 grafana | logger=resource-migrator t=2025-06-16T14:55:07.797789181Z level=info msg="Migration successfully executed" id="Add column folder in resource_history" duration=12.750045ms 15:01:13 grafana | logger=resource-migrator t=2025-06-16T14:55:07.803168945Z level=info msg="Executing migration" id="Add column folder in resource" 15:01:13 grafana | logger=resource-migrator t=2025-06-16T14:55:07.810769678Z level=info msg="Migration successfully executed" id="Add column folder in resource" duration=7.600573ms 15:01:13 grafana | logger=resource-migrator t=2025-06-16T14:55:07.814387338Z level=info msg="Executing migration" id="Migrate DeletionMarkers to real Resource objects" 15:01:13 grafana | logger=deletion-marker-migrator t=2025-06-16T14:55:07.814455039Z level=info msg="finding any deletion markers" 15:01:13 grafana | logger=resource-migrator t=2025-06-16T14:55:07.815132784Z level=info msg="Migration successfully executed" id="Migrate DeletionMarkers to real Resource objects" duration=745.736µs 15:01:13 grafana | logger=resource-migrator t=2025-06-16T14:55:07.818477172Z level=info msg="Executing migration" id="Add index to resource_history for get trash" 15:01:13 grafana | logger=resource-migrator t=2025-06-16T14:55:07.820325177Z level=info msg="Migration successfully executed" id="Add index to resource_history for get trash" duration=1.847495ms 15:01:13 grafana | logger=resource-migrator t=2025-06-16T14:55:07.824442722Z level=info msg="Executing migration" id="Add generation to resource history" 15:01:13 grafana | logger=resource-migrator t=2025-06-16T14:55:07.83625824Z level=info msg="Migration successfully executed" id="Add generation to resource history" duration=11.818358ms 15:01:13 grafana | logger=resource-migrator t=2025-06-16T14:55:07.84113659Z level=info msg="Executing migration" id="Add generation index to resource history" 15:01:13 grafana | logger=resource-migrator t=2025-06-16T14:55:07.842595342Z level=info msg="Migration successfully executed" id="Add generation index to resource history" duration=1.458872ms 15:01:13 grafana | logger=resource-migrator t=2025-06-16T14:55:07.845437715Z level=info msg="migrations completed" performed=26 skipped=0 duration=251.725036ms 15:01:13 grafana | logger=resource-migrator t=2025-06-16T14:55:07.846140661Z level=info msg="Unlocking database" 15:01:13 grafana | t=2025-06-16T14:55:07.846472214Z level=info caller=logger.go:214 
time=2025-06-16T14:55:07.846449054Z msg="Using channel notifier" logger=sql-resource-server 15:01:13 grafana | logger=plugin.store t=2025-06-16T14:55:07.858438984Z level=info msg="Loading plugins..." 15:01:13 grafana | logger=plugins.registration t=2025-06-16T14:55:07.896761951Z level=error msg="Could not register plugin" pluginId=table error="plugin table is already registered" 15:01:13 grafana | logger=plugins.initialization t=2025-06-16T14:55:07.896789562Z level=error msg="Could not initialize plugin" pluginId=table error="plugin table is already registered" 15:01:13 grafana | logger=plugin.store t=2025-06-16T14:55:07.896843912Z level=info msg="Plugins loaded" count=53 duration=38.406448ms 15:01:13 grafana | logger=query_data t=2025-06-16T14:55:07.902098885Z level=info msg="Query Service initialization" 15:01:13 grafana | logger=live.push_http t=2025-06-16T14:55:07.915545616Z level=info msg="Live Push Gateway initialization" 15:01:13 grafana | logger=ngalert.notifier.alertmanager org=1 t=2025-06-16T14:55:07.930124717Z level=info msg="Applying new configuration to Alertmanager" configHash=d2c56faca6af2a5772ff4253222f7386 15:01:13 grafana | logger=ngalert t=2025-06-16T14:55:07.938264445Z level=info msg="Using simple database alert instance store" 15:01:13 grafana | logger=ngalert.state.manager.persist t=2025-06-16T14:55:07.938289295Z level=info msg="Using sync state persister" 15:01:13 grafana | logger=infra.usagestats.collector t=2025-06-16T14:55:07.941124128Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2 15:01:13 grafana | logger=grafanaStorageLogger t=2025-06-16T14:55:07.941594523Z level=info msg="Storage starting" 15:01:13 grafana | logger=plugin.backgroundinstaller t=2025-06-16T14:55:07.941713204Z level=info msg="Installing plugin" pluginId=grafana-metricsdrilldown-app version= 15:01:13 grafana | logger=ngalert.state.manager t=2025-06-16T14:55:07.942229848Z level=info msg="Warming state cache for startup" 15:01:13 grafana | logger=ngalert.multiorg.alertmanager t=2025-06-16T14:55:07.942843243Z level=info msg="Starting MultiOrg Alertmanager" 15:01:13 grafana | logger=http.server t=2025-06-16T14:55:07.947796994Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=http subUrl= socket= 15:01:13 grafana | logger=provisioning.datasources t=2025-06-16T14:55:08.045341033Z level=info msg="inserting datasource from configuration" name=PolicyPrometheus uid=dkSf71fnz 15:01:13 grafana | logger=ngalert.state.manager t=2025-06-16T14:55:08.056295154Z level=info msg="State cache has been initialized" states=0 duration=114.065496ms 15:01:13 grafana | logger=ngalert.scheduler t=2025-06-16T14:55:08.056357844Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=3 15:01:13 grafana | logger=ticker t=2025-06-16T14:55:08.056497796Z level=info msg=starting first_tick=2025-06-16T14:55:10Z 15:01:13 grafana | logger=sqlstore.transactions t=2025-06-16T14:55:08.064147068Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 15:01:13 grafana | logger=provisioning.alerting t=2025-06-16T14:55:08.08724572Z level=info msg="starting to provision alerting" 15:01:13 grafana | logger=provisioning.alerting t=2025-06-16T14:55:08.08727634Z level=info msg="finished to provision alerting" 15:01:13 grafana | logger=provisioning.dashboard t=2025-06-16T14:55:08.089414218Z level=info msg="starting to provision dashboards" 15:01:13 grafana | logger=grafana-apiserver t=2025-06-16T14:55:08.165923092Z level=info msg="Adding GroupVersion 
playlist.grafana.app v0alpha1 to ResourceManager" 15:01:13 grafana | logger=grafana-apiserver t=2025-06-16T14:55:08.168326402Z level=info msg="Adding GroupVersion dashboard.grafana.app v1beta1 to ResourceManager" 15:01:13 grafana | logger=grafana-apiserver t=2025-06-16T14:55:08.168822596Z level=info msg="Adding GroupVersion dashboard.grafana.app v0alpha1 to ResourceManager" 15:01:13 grafana | logger=grafana-apiserver t=2025-06-16T14:55:08.169216409Z level=info msg="Adding GroupVersion dashboard.grafana.app v2alpha1 to ResourceManager" 15:01:13 grafana | logger=grafana-apiserver t=2025-06-16T14:55:08.169598862Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager" 15:01:13 grafana | logger=grafana-apiserver t=2025-06-16T14:55:08.170244077Z level=info msg="Adding GroupVersion folder.grafana.app v1beta1 to ResourceManager" 15:01:13 grafana | logger=grafana-apiserver t=2025-06-16T14:55:08.171403737Z level=info msg="Adding GroupVersion iam.grafana.app v0alpha1 to ResourceManager" 15:01:13 grafana | logger=grafana-apiserver t=2025-06-16T14:55:08.173800946Z level=info msg="Adding GroupVersion notifications.alerting.grafana.app v0alpha1 to ResourceManager" 15:01:13 grafana | logger=grafana-apiserver t=2025-06-16T14:55:08.17538231Z level=info msg="Adding GroupVersion userstorage.grafana.app v0alpha1 to ResourceManager" 15:01:13 grafana | logger=app-registry t=2025-06-16T14:55:08.236297515Z level=info msg="app registry initialized" 15:01:13 grafana | logger=provisioning.dashboard t=2025-06-16T14:55:09.290247687Z level=info msg="finished to provision dashboards" 15:01:13 grafana | logger=grafana.update.checker t=2025-06-16T14:55:11.541366334Z level=info msg="Update check succeeded" duration=3.599935971s 15:01:13 grafana | logger=plugin.backgroundinstaller t=2025-06-16T14:55:17.943096971Z level=error msg="Failed to install plugin" pluginId=grafana-metricsdrilldown-app version= error="Get \"https://grafana.com/api/plugins/grafana-metricsdrilldown-app/versions\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" 15:01:13 grafana | logger=plugin.backgroundinstaller t=2025-06-16T14:55:17.943263592Z level=info msg="Installing plugin" pluginId=grafana-lokiexplore-app version= 15:01:13 grafana | logger=plugins.update.checker t=2025-06-16T14:55:21.926914684Z level=info msg="Update check succeeded" duration=13.985127239s 15:01:13 grafana | logger=plugin.angulardetectorsprovider.dynamic t=2025-06-16T14:55:23.51018516Z level=info msg="Patterns update finished" duration=15.567990692s 15:01:13 grafana | logger=plugin.installer t=2025-06-16T14:55:24.822158146Z level=info msg="Installing plugin" pluginId=grafana-lokiexplore-app version= 15:01:13 grafana | logger=installer.fs t=2025-06-16T14:55:24.949201972Z level=info msg="Downloaded and extracted grafana-lokiexplore-app v1.0.17 zip successfully to /var/lib/grafana/plugins/grafana-lokiexplore-app" 15:01:13 grafana | logger=plugins.registration t=2025-06-16T14:55:25.000871286Z level=info msg="Plugin registered" pluginId=grafana-lokiexplore-app 15:01:13 grafana | logger=plugin.backgroundinstaller t=2025-06-16T14:55:25.000989517Z level=info msg="Plugin successfully installed" pluginId=grafana-lokiexplore-app version= duration=7.057710095s 15:01:13 grafana | logger=plugin.backgroundinstaller t=2025-06-16T14:55:25.001055128Z level=info msg="Installing plugin" pluginId=grafana-pyroscope-app version= 15:01:13 grafana | logger=plugin.installer t=2025-06-16T14:55:25.252261124Z level=info msg="Installing plugin" 
pluginId=grafana-pyroscope-app version= 15:01:13 grafana | logger=installer.fs t=2025-06-16T14:55:25.302962131Z level=info msg="Downloaded and extracted grafana-pyroscope-app v1.4.1 zip successfully to /var/lib/grafana/plugins/grafana-pyroscope-app" 15:01:13 grafana | logger=plugins.registration t=2025-06-16T14:55:25.318536349Z level=info msg="Plugin registered" pluginId=grafana-pyroscope-app 15:01:13 grafana | logger=plugin.backgroundinstaller t=2025-06-16T14:55:25.318559439Z level=info msg="Plugin successfully installed" pluginId=grafana-pyroscope-app version= duration=317.436351ms 15:01:13 grafana | logger=plugin.backgroundinstaller t=2025-06-16T14:55:25.318582599Z level=info msg="Installing plugin" pluginId=grafana-exploretraces-app version= 15:01:13 grafana | logger=plugin.installer t=2025-06-16T14:55:25.555989831Z level=info msg="Installing plugin" pluginId=grafana-exploretraces-app version= 15:01:13 grafana | logger=installer.fs t=2025-06-16T14:55:25.62515807Z level=info msg="Downloaded and extracted grafana-exploretraces-app v1.0.0 zip successfully to /var/lib/grafana/plugins/grafana-exploretraces-app" 15:01:13 grafana | logger=plugins.registration t=2025-06-16T14:55:25.662320015Z level=info msg="Plugin registered" pluginId=grafana-exploretraces-app 15:01:13 grafana | logger=plugin.backgroundinstaller t=2025-06-16T14:55:25.662370916Z level=info msg="Plugin successfully installed" pluginId=grafana-exploretraces-app version= duration=343.781616ms 15:01:13 grafana | logger=infra.usagestats t=2025-06-16T14:56:39.950232715Z level=info msg="Usage stats are ready to report" 15:01:13 kafka | ===> User 15:01:13 kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) 15:01:13 kafka | ===> Configuring ... 15:01:13 kafka | Running in Zookeeper mode... 15:01:13 kafka | ===> Running preflight checks ... 15:01:13 kafka | ===> Check if /var/lib/kafka/data is writable ... 15:01:13 kafka | ===> Check if Zookeeper is healthy ... 15:01:13 kafka | [2025-06-16 14:55:06,134] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper) 15:01:13 kafka | [2025-06-16 14:55:06,135] INFO Client environment:host.name=kafka (org.apache.zookeeper.ZooKeeper) 15:01:13 kafka | [2025-06-16 14:55:06,135] INFO Client environment:java.version=11.0.26 (org.apache.zookeeper.ZooKeeper) 15:01:13 kafka | [2025-06-16 14:55:06,135] INFO Client environment:java.vendor=Azul Systems, Inc. 
15:01:13 kafka | ===> User
15:01:13 kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser)
15:01:13 kafka | ===> Configuring ...
15:01:13 kafka | Running in Zookeeper mode...
15:01:13 kafka | ===> Running preflight checks ...
15:01:13 kafka | ===> Check if /var/lib/kafka/data is writable ...
15:01:13 kafka | ===> Check if Zookeeper is healthy ...
15:01:13 kafka | [2025-06-16 14:55:06,134] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper)
15:01:13 kafka | [2025-06-16 14:55:06,135] INFO Client environment:host.name=kafka (org.apache.zookeeper.ZooKeeper)
15:01:13 kafka | [2025-06-16 14:55:06,135] INFO Client environment:java.version=11.0.26 (org.apache.zookeeper.ZooKeeper)
15:01:13 kafka | [2025-06-16 14:55:06,135] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper)
15:01:13 kafka | [2025-06-16 14:55:06,135] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper)
15:01:13 kafka | [2025-06-16 14:55:06,135] INFO Client environment:java.class.path=/usr/share/java/cp-base-new/kafka-storage-api-7.4.9-ccs.jar:/usr/share/java/cp-base-new/scala-logging_2.13-3.9.4.jar:/usr/share/java/cp-base-new/jackson-datatype-jdk8-2.14.2.jar:/usr/share/java/cp-base-new/logredactor-1.0.12.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/kafka-server-common-7.4.9-ccs.jar:/usr/share/java/cp-base-new/scala-library-2.13.10.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/jackson-annotations-2.14.2.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/kafka-clients-7.4.9-ccs.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.14.2.jar:/usr/share/java/cp-base-new/kafka-storage-7.4.9-ccs.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/kafka-raft-7.4.9-ccs.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.8.4.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.2-1.jar:/usr/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-7.4.9-ccs.jar:/usr/share/java/cp-base-new/common-utils-7.4.9.jar:/usr/share/java/cp-base-new/jackson-dataformat-yaml-2.14.2.jar:/usr/share/java/cp-base-new/kafka-metadata-7.4.9-ccs.jar:/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/share/java/cp-base-new/jmx_prometheus_javaagent-0.18.0.jar:/usr/share/java/cp-base-new/reload4j-1.2.25.jar:/usr/share/java/cp-base-new/jackson-core-2.14.2.jar:/usr/share/java/cp-base-new/commons-io-2.16.0.jar:/usr/share/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/audience-annotations-0.12.0.jar:/usr/share/java/cp-base-new/gson-2.9.0.jar:/usr/share/java/cp-base-new/snakeyaml-2.0.jar:/usr/share/java/cp-base-new/zookeeper-3.8.4.jar:/usr/share/java/cp-base-new/kafka_2.13-7.4.9-ccs.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.12.jar:/usr/share/java/cp-base-new/utility-belt-7.4.9-53.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.10.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.14.2.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/jackson-databind-2.14.2.jar:/usr/share/java/cp-base-new/snappy-java-1.1.10.5.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.4.9.jar:/usr/share/java/cp-base-new/jose4j-0.9.5.jar (org.apache.zookeeper.ZooKeeper)
15:01:13 kafka | [2025-06-16 14:55:06,135] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
15:01:13 kafka | [2025-06-16 14:55:06,135] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
15:01:13 kafka | [2025-06-16 14:55:06,135] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper)
15:01:13 kafka | [2025-06-16 14:55:06,135] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
15:01:13 kafka | [2025-06-16 14:55:06,135] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
15:01:13 kafka | [2025-06-16 14:55:06,135] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper)
15:01:13 kafka | [2025-06-16 14:55:06,135] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper)
15:01:13 kafka | [2025-06-16 14:55:06,135] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper)
15:01:13 kafka | [2025-06-16 14:55:06,135] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper)
15:01:13 kafka | [2025-06-16 14:55:06,135] INFO Client environment:os.memory.free=493MB (org.apache.zookeeper.ZooKeeper)
15:01:13 kafka | [2025-06-16 14:55:06,135] INFO Client environment:os.memory.max=8042MB (org.apache.zookeeper.ZooKeeper)
15:01:13 kafka | [2025-06-16 14:55:06,136] INFO Client environment:os.memory.total=504MB (org.apache.zookeeper.ZooKeeper)
15:01:13 kafka | [2025-06-16 14:55:06,138] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@19dc67c2 (org.apache.zookeeper.ZooKeeper)
15:01:13 kafka | [2025-06-16 14:55:06,141] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
15:01:13 kafka | [2025-06-16 14:55:06,145] INFO jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket)
15:01:13 kafka | [2025-06-16 14:55:06,151] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn)
15:01:13 kafka | [2025-06-16 14:55:06,170] INFO Opening socket connection to server zookeeper/172.17.0.4:2181. (org.apache.zookeeper.ClientCnxn)
15:01:13 kafka | [2025-06-16 14:55:06,170] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
15:01:13 kafka | [2025-06-16 14:55:06,178] INFO Socket connection established, initiating session, client: /172.17.0.6:41224, server: zookeeper/172.17.0.4:2181 (org.apache.zookeeper.ClientCnxn)
15:01:13 kafka | [2025-06-16 14:55:06,205] INFO Session establishment complete on server zookeeper/172.17.0.4:2181, session id = 0x100000233930000, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn)
15:01:13 kafka | [2025-06-16 14:55:06,325] INFO Session: 0x100000233930000 closed (org.apache.zookeeper.ZooKeeper)
15:01:13 kafka | [2025-06-16 14:55:06,325] INFO EventThread shut down for session: 0x100000233930000 (org.apache.zookeeper.ClientCnxn)
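The `===> Check if Zookeeper is healthy ...` preflight above opens a short-lived ZooKeeper session (session id 0x100000233930000, negotiated timeout 40000 ms) and closes it straight away; its only job is to confirm the ensemble is reachable before Kafka itself launches. A rough stand-in for that probe using ZooKeeper's `ruok` four-letter command, assuming the command is whitelisted on the server via 4lw.commands.whitelist (the image's actual check goes through the Java ZooKeeper client shown in the log, not this):

```python
# Illustrative ZooKeeper liveness probe, not the image's preflight code.
# Assumes the 'ruok' four-letter word is enabled on the server.
import socket

def zk_is_healthy(host: str = "zookeeper", port: int = 2181) -> bool:
    try:
        with socket.create_connection((host, port), timeout=5) as s:
            s.sendall(b"ruok")           # ZooKeeper answers "imok" when serving
            return s.recv(4) == b"imok"
    except OSError:
        return False

print(zk_is_healthy())
```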
15:01:13 kafka | Using log4j config /etc/kafka/log4j.properties
15:01:13 kafka | ===> Launching ...
15:01:13 kafka | ===> Launching kafka ...
15:01:13 kafka | [2025-06-16 14:55:07,026] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
15:01:13 kafka | [2025-06-16 14:55:07,304] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
15:01:13 kafka | [2025-06-16 14:55:07,385] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler)
15:01:13 kafka | [2025-06-16 14:55:07,387] INFO starting (kafka.server.KafkaServer)
15:01:13 kafka | [2025-06-16 14:55:07,387] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer)
15:01:13 kafka | [2025-06-16 14:55:07,401] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181.
(kafka.zookeeper.ZooKeeperClient) 15:01:13 kafka | [2025-06-16 14:55:07,405] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper) 15:01:13 kafka | [2025-06-16 14:55:07,405] INFO Client environment:host.name=kafka (org.apache.zookeeper.ZooKeeper) 15:01:13 kafka | [2025-06-16 14:55:07,405] INFO Client environment:java.version=11.0.26 (org.apache.zookeeper.ZooKeeper) 15:01:13 kafka | [2025-06-16 14:55:07,405] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper) 15:01:13 kafka | [2025-06-16 14:55:07,405] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) 15:01:13 kafka | [2025-06-16 14:55:07,405] INFO Client environment:java.class.path=/usr/bin/../share/java/kafka/kafka-storage-api-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/netty-common-4.1.115.Final.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/connect-json-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/scala-library-2.13.10.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/kafka-shell-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/reflections-0.9.12.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/kafka-clients-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/kafka-storage-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/kafka-raft-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/kafka-streams-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.0.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.2-1.jar:/usr/bin/../share/java/kafka/connect-runtime-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.115.Final.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/connect-api-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.115.Final.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.115.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.1.2.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.115.Final.jar:/usr/bin/../share/java/kafka/c
onnect-basic-auth-extension-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.115.Final.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/trogdor-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/commons-io-2.16.0.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/connect-transforms-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/connect-mirror-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/kafka-tools-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.10.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.115.Final.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.4.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.115.Final.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.115.Final.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.57.v20241219.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.ZooKeeper) 15:01:13 kafka | [2025-06-16 14:55:07,405] INFO Client 
environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) 15:01:13 kafka | [2025-06-16 14:55:07,405] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) 15:01:13 kafka | [2025-06-16 14:55:07,405] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) 15:01:13 kafka | [2025-06-16 14:55:07,405] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) 15:01:13 kafka | [2025-06-16 14:55:07,405] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) 15:01:13 kafka | [2025-06-16 14:55:07,405] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) 15:01:13 kafka | [2025-06-16 14:55:07,406] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) 15:01:13 kafka | [2025-06-16 14:55:07,406] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) 15:01:13 kafka | [2025-06-16 14:55:07,406] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) 15:01:13 kafka | [2025-06-16 14:55:07,406] INFO Client environment:os.memory.free=1009MB (org.apache.zookeeper.ZooKeeper) 15:01:13 kafka | [2025-06-16 14:55:07,406] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper) 15:01:13 kafka | [2025-06-16 14:55:07,406] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper) 15:01:13 kafka | [2025-06-16 14:55:07,407] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@5d8bafa9 (org.apache.zookeeper.ZooKeeper) 15:01:13 kafka | [2025-06-16 14:55:07,411] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket) 15:01:13 kafka | [2025-06-16 14:55:07,416] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) 15:01:13 kafka | [2025-06-16 14:55:07,418] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient) 15:01:13 kafka | [2025-06-16 14:55:07,422] INFO Opening socket connection to server zookeeper/172.17.0.4:2181. (org.apache.zookeeper.ClientCnxn) 15:01:13 kafka | [2025-06-16 14:55:07,429] INFO Socket connection established, initiating session, client: /172.17.0.6:41226, server: zookeeper/172.17.0.4:2181 (org.apache.zookeeper.ClientCnxn) 15:01:13 kafka | [2025-06-16 14:55:07,437] INFO Session establishment complete on server zookeeper/172.17.0.4:2181, session id = 0x100000233930001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn) 15:01:13 kafka | [2025-06-16 14:55:07,445] INFO [ZooKeeperClient Kafka server] Connected. 
(kafka.zookeeper.ZooKeeperClient) 15:01:13 kafka | [2025-06-16 14:55:07,746] INFO Cluster ID = Wfl8AkZLQj6X2gSeQGSSIQ (kafka.server.KafkaServer) 15:01:13 kafka | [2025-06-16 14:55:07,751] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint) 15:01:13 kafka | [2025-06-16 14:55:07,799] INFO KafkaConfig values: 15:01:13 kafka | advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092 15:01:13 kafka | alter.config.policy.class.name = null 15:01:13 kafka | alter.log.dirs.replication.quota.window.num = 11 15:01:13 kafka | alter.log.dirs.replication.quota.window.size.seconds = 1 15:01:13 kafka | authorizer.class.name = 15:01:13 kafka | auto.create.topics.enable = true 15:01:13 kafka | auto.include.jmx.reporter = true 15:01:13 kafka | auto.leader.rebalance.enable = true 15:01:13 kafka | background.threads = 10 15:01:13 kafka | broker.heartbeat.interval.ms = 2000 15:01:13 kafka | broker.id = 1 15:01:13 kafka | broker.id.generation.enable = true 15:01:13 kafka | broker.rack = null 15:01:13 kafka | broker.session.timeout.ms = 9000 15:01:13 kafka | client.quota.callback.class = null 15:01:13 kafka | compression.type = producer 15:01:13 kafka | connection.failed.authentication.delay.ms = 100 15:01:13 kafka | connections.max.idle.ms = 600000 15:01:13 kafka | connections.max.reauth.ms = 0 15:01:13 kafka | control.plane.listener.name = null 15:01:13 kafka | controlled.shutdown.enable = true 15:01:13 kafka | controlled.shutdown.max.retries = 3 15:01:13 kafka | controlled.shutdown.retry.backoff.ms = 5000 15:01:13 kafka | controller.listener.names = null 15:01:13 kafka | controller.quorum.append.linger.ms = 25 15:01:13 kafka | controller.quorum.election.backoff.max.ms = 1000 15:01:13 kafka | controller.quorum.election.timeout.ms = 1000 15:01:13 kafka | controller.quorum.fetch.timeout.ms = 2000 15:01:13 kafka | controller.quorum.request.timeout.ms = 2000 15:01:13 kafka | controller.quorum.retry.backoff.ms = 20 15:01:13 kafka | controller.quorum.voters = [] 15:01:13 kafka | controller.quota.window.num = 11 15:01:13 kafka | controller.quota.window.size.seconds = 1 15:01:13 kafka | controller.socket.timeout.ms = 30000 15:01:13 kafka | create.topic.policy.class.name = null 15:01:13 kafka | default.replication.factor = 1 15:01:13 kafka | delegation.token.expiry.check.interval.ms = 3600000 15:01:13 kafka | delegation.token.expiry.time.ms = 86400000 15:01:13 kafka | delegation.token.master.key = null 15:01:13 kafka | delegation.token.max.lifetime.ms = 604800000 15:01:13 kafka | delegation.token.secret.key = null 15:01:13 kafka | delete.records.purgatory.purge.interval.requests = 1 15:01:13 kafka | delete.topic.enable = true 15:01:13 kafka | early.start.listeners = null 15:01:13 kafka | fetch.max.bytes = 57671680 15:01:13 kafka | fetch.purgatory.purge.interval.requests = 1000 15:01:13 kafka | group.initial.rebalance.delay.ms = 3000 15:01:13 kafka | group.max.session.timeout.ms = 1800000 15:01:13 kafka | group.max.size = 2147483647 15:01:13 kafka | group.min.session.timeout.ms = 6000 15:01:13 kafka | initial.broker.registration.timeout.ms = 60000 15:01:13 kafka | inter.broker.listener.name = PLAINTEXT 15:01:13 kafka | inter.broker.protocol.version = 3.4-IV0 15:01:13 kafka | kafka.metrics.polling.interval.secs = 10 15:01:13 kafka | kafka.metrics.reporters = [] 15:01:13 kafka | leader.imbalance.check.interval.seconds = 300 15:01:13 kafka | leader.imbalance.per.broker.percentage = 10 15:01:13 kafka | listener.security.protocol.map = 
PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT 15:01:13 kafka | listeners = PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092 15:01:13 kafka | log.cleaner.backoff.ms = 15000 15:01:13 kafka | log.cleaner.dedupe.buffer.size = 134217728 15:01:13 kafka | log.cleaner.delete.retention.ms = 86400000 15:01:13 kafka | log.cleaner.enable = true 15:01:13 kafka | log.cleaner.io.buffer.load.factor = 0.9 15:01:13 kafka | log.cleaner.io.buffer.size = 524288 15:01:13 kafka | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 15:01:13 kafka | log.cleaner.max.compaction.lag.ms = 9223372036854775807 15:01:13 kafka | log.cleaner.min.cleanable.ratio = 0.5 15:01:13 kafka | log.cleaner.min.compaction.lag.ms = 0 15:01:13 kafka | log.cleaner.threads = 1 15:01:13 kafka | log.cleanup.policy = [delete] 15:01:13 kafka | log.dir = /tmp/kafka-logs 15:01:13 kafka | log.dirs = /var/lib/kafka/data 15:01:13 kafka | log.flush.interval.messages = 9223372036854775807 15:01:13 kafka | log.flush.interval.ms = null 15:01:13 kafka | log.flush.offset.checkpoint.interval.ms = 60000 15:01:13 kafka | log.flush.scheduler.interval.ms = 9223372036854775807 15:01:13 kafka | log.flush.start.offset.checkpoint.interval.ms = 60000 15:01:13 kafka | log.index.interval.bytes = 4096 15:01:13 kafka | log.index.size.max.bytes = 10485760 15:01:13 kafka | log.message.downconversion.enable = true 15:01:13 kafka | log.message.format.version = 3.0-IV1 15:01:13 kafka | log.message.timestamp.difference.max.ms = 9223372036854775807 15:01:13 kafka | log.message.timestamp.type = CreateTime 15:01:13 kafka | log.preallocate = false 15:01:13 kafka | log.retention.bytes = -1 15:01:13 kafka | log.retention.check.interval.ms = 300000 15:01:13 kafka | log.retention.hours = 168 15:01:13 kafka | log.retention.minutes = null 15:01:13 kafka | log.retention.ms = null 15:01:13 kafka | log.roll.hours = 168 15:01:13 kafka | log.roll.jitter.hours = 0 15:01:13 kafka | log.roll.jitter.ms = null 15:01:13 kafka | log.roll.ms = null 15:01:13 kafka | log.segment.bytes = 1073741824 15:01:13 kafka | log.segment.delete.delay.ms = 60000 15:01:13 kafka | max.connection.creation.rate = 2147483647 15:01:13 kafka | max.connections = 2147483647 15:01:13 kafka | max.connections.per.ip = 2147483647 15:01:13 kafka | max.connections.per.ip.overrides = 15:01:13 kafka | max.incremental.fetch.session.cache.slots = 1000 15:01:13 kafka | message.max.bytes = 1048588 15:01:13 kafka | metadata.log.dir = null 15:01:13 kafka | metadata.log.max.record.bytes.between.snapshots = 20971520 15:01:13 kafka | metadata.log.max.snapshot.interval.ms = 3600000 15:01:13 kafka | metadata.log.segment.bytes = 1073741824 15:01:13 kafka | metadata.log.segment.min.bytes = 8388608 15:01:13 kafka | metadata.log.segment.ms = 604800000 15:01:13 kafka | metadata.max.idle.interval.ms = 500 15:01:13 kafka | metadata.max.retention.bytes = 104857600 15:01:13 kafka | metadata.max.retention.ms = 604800000 15:01:13 kafka | metric.reporters = [] 15:01:13 kafka | metrics.num.samples = 2 15:01:13 kafka | metrics.recording.level = INFO 15:01:13 kafka | metrics.sample.window.ms = 30000 15:01:13 kafka | min.insync.replicas = 1 15:01:13 kafka | node.id = 1 15:01:13 kafka | num.io.threads = 8 15:01:13 kafka | num.network.threads = 3 15:01:13 kafka | num.partitions = 1 15:01:13 kafka | num.recovery.threads.per.data.dir = 1 15:01:13 kafka | num.replica.alter.log.dirs.threads = null 15:01:13 kafka | num.replica.fetchers = 1 15:01:13 kafka | offset.metadata.max.bytes = 4096 15:01:13 kafka | offsets.commit.required.acks = -1 
15:01:13 kafka | offsets.commit.timeout.ms = 5000 15:01:13 kafka | offsets.load.buffer.size = 5242880 15:01:13 kafka | offsets.retention.check.interval.ms = 600000 15:01:13 kafka | offsets.retention.minutes = 10080 15:01:13 kafka | offsets.topic.compression.codec = 0 15:01:13 kafka | offsets.topic.num.partitions = 50 15:01:13 kafka | offsets.topic.replication.factor = 1 15:01:13 kafka | offsets.topic.segment.bytes = 104857600 15:01:13 kafka | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding 15:01:13 kafka | password.encoder.iterations = 4096 15:01:13 kafka | password.encoder.key.length = 128 15:01:13 kafka | password.encoder.keyfactory.algorithm = null 15:01:13 kafka | password.encoder.old.secret = null 15:01:13 kafka | password.encoder.secret = null 15:01:13 kafka | principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder 15:01:13 kafka | process.roles = [] 15:01:13 kafka | producer.id.expiration.check.interval.ms = 600000 15:01:13 kafka | producer.id.expiration.ms = 86400000 15:01:13 kafka | producer.purgatory.purge.interval.requests = 1000 15:01:13 kafka | queued.max.request.bytes = -1 15:01:13 kafka | queued.max.requests = 500 15:01:13 kafka | quota.window.num = 11 15:01:13 kafka | quota.window.size.seconds = 1 15:01:13 kafka | remote.log.index.file.cache.total.size.bytes = 1073741824 15:01:13 kafka | remote.log.manager.task.interval.ms = 30000 15:01:13 kafka | remote.log.manager.task.retry.backoff.max.ms = 30000 15:01:13 kafka | remote.log.manager.task.retry.backoff.ms = 500 15:01:13 kafka | remote.log.manager.task.retry.jitter = 0.2 15:01:13 kafka | remote.log.manager.thread.pool.size = 10 15:01:13 kafka | remote.log.metadata.manager.class.name = null 15:01:13 kafka | remote.log.metadata.manager.class.path = null 15:01:13 kafka | remote.log.metadata.manager.impl.prefix = null 15:01:13 kafka | remote.log.metadata.manager.listener.name = null 15:01:13 kafka | remote.log.reader.max.pending.tasks = 100 15:01:13 kafka | remote.log.reader.threads = 10 15:01:13 kafka | remote.log.storage.manager.class.name = null 15:01:13 kafka | remote.log.storage.manager.class.path = null 15:01:13 kafka | remote.log.storage.manager.impl.prefix = null 15:01:13 kafka | remote.log.storage.system.enable = false 15:01:13 kafka | replica.fetch.backoff.ms = 1000 15:01:13 kafka | replica.fetch.max.bytes = 1048576 15:01:13 kafka | replica.fetch.min.bytes = 1 15:01:13 kafka | replica.fetch.response.max.bytes = 10485760 15:01:13 kafka | replica.fetch.wait.max.ms = 500 15:01:13 kafka | replica.high.watermark.checkpoint.interval.ms = 5000 15:01:13 kafka | replica.lag.time.max.ms = 30000 15:01:13 kafka | replica.selector.class = null 15:01:13 kafka | replica.socket.receive.buffer.bytes = 65536 15:01:13 kafka | replica.socket.timeout.ms = 30000 15:01:13 kafka | replication.quota.window.num = 11 15:01:13 kafka | replication.quota.window.size.seconds = 1 15:01:13 kafka | request.timeout.ms = 30000 15:01:13 kafka | reserved.broker.max.id = 1000 15:01:13 kafka | sasl.client.callback.handler.class = null 15:01:13 kafka | sasl.enabled.mechanisms = [GSSAPI] 15:01:13 kafka | sasl.jaas.config = null 15:01:13 kafka | sasl.kerberos.kinit.cmd = /usr/bin/kinit 15:01:13 kafka | sasl.kerberos.min.time.before.relogin = 60000 15:01:13 kafka | sasl.kerberos.principal.to.local.rules = [DEFAULT] 15:01:13 kafka | sasl.kerberos.service.name = null 15:01:13 kafka | sasl.kerberos.ticket.renew.jitter = 0.05 15:01:13 kafka | sasl.kerberos.ticket.renew.window.factor = 0.8 15:01:13 kafka | 
sasl.login.callback.handler.class = null 15:01:13 kafka | sasl.login.class = null 15:01:13 kafka | sasl.login.connect.timeout.ms = null 15:01:13 kafka | sasl.login.read.timeout.ms = null 15:01:13 kafka | sasl.login.refresh.buffer.seconds = 300 15:01:13 kafka | sasl.login.refresh.min.period.seconds = 60 15:01:13 kafka | sasl.login.refresh.window.factor = 0.8 15:01:13 kafka | sasl.login.refresh.window.jitter = 0.05 15:01:13 kafka | sasl.login.retry.backoff.max.ms = 10000 15:01:13 kafka | sasl.login.retry.backoff.ms = 100 15:01:13 kafka | sasl.mechanism.controller.protocol = GSSAPI 15:01:13 kafka | sasl.mechanism.inter.broker.protocol = GSSAPI 15:01:13 kafka | sasl.oauthbearer.clock.skew.seconds = 30 15:01:13 kafka | sasl.oauthbearer.expected.audience = null 15:01:13 kafka | sasl.oauthbearer.expected.issuer = null 15:01:13 kafka | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 15:01:13 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 15:01:13 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 15:01:13 kafka | sasl.oauthbearer.jwks.endpoint.url = null 15:01:13 kafka | sasl.oauthbearer.scope.claim.name = scope 15:01:13 kafka | sasl.oauthbearer.sub.claim.name = sub 15:01:13 kafka | sasl.oauthbearer.token.endpoint.url = null 15:01:13 kafka | sasl.server.callback.handler.class = null 15:01:13 kafka | sasl.server.max.receive.size = 524288 15:01:13 kafka | security.inter.broker.protocol = PLAINTEXT 15:01:13 kafka | security.providers = null 15:01:13 kafka | socket.connection.setup.timeout.max.ms = 30000 15:01:13 kafka | socket.connection.setup.timeout.ms = 10000 15:01:13 kafka | socket.listen.backlog.size = 50 15:01:13 kafka | socket.receive.buffer.bytes = 102400 15:01:13 kafka | socket.request.max.bytes = 104857600 15:01:13 kafka | socket.send.buffer.bytes = 102400 15:01:13 kafka | ssl.cipher.suites = [] 15:01:13 kafka | ssl.client.auth = none 15:01:13 kafka | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 15:01:13 kafka | ssl.endpoint.identification.algorithm = https 15:01:13 kafka | ssl.engine.factory.class = null 15:01:13 kafka | ssl.key.password = null 15:01:13 kafka | ssl.keymanager.algorithm = SunX509 15:01:13 kafka | ssl.keystore.certificate.chain = null 15:01:13 kafka | ssl.keystore.key = null 15:01:13 kafka | ssl.keystore.location = null 15:01:13 kafka | ssl.keystore.password = null 15:01:13 kafka | ssl.keystore.type = JKS 15:01:13 kafka | ssl.principal.mapping.rules = DEFAULT 15:01:13 kafka | ssl.protocol = TLSv1.3 15:01:13 kafka | ssl.provider = null 15:01:13 kafka | ssl.secure.random.implementation = null 15:01:13 kafka | ssl.trustmanager.algorithm = PKIX 15:01:13 kafka | ssl.truststore.certificates = null 15:01:13 kafka | ssl.truststore.location = null 15:01:13 kafka | ssl.truststore.password = null 15:01:13 kafka | ssl.truststore.type = JKS 15:01:13 kafka | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000 15:01:13 kafka | transaction.max.timeout.ms = 900000 15:01:13 kafka | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000 15:01:13 kafka | transaction.state.log.load.buffer.size = 5242880 15:01:13 kafka | transaction.state.log.min.isr = 2 15:01:13 kafka | transaction.state.log.num.partitions = 50 15:01:13 kafka | transaction.state.log.replication.factor = 3 15:01:13 kafka | transaction.state.log.segment.bytes = 104857600 15:01:13 kafka | transactional.id.expiration.ms = 604800000 15:01:13 kafka | unclean.leader.election.enable = false 15:01:13 kafka | zookeeper.clientCnxnSocket = null 15:01:13 kafka | 
zookeeper.connect = zookeeper:2181
15:01:13 kafka | zookeeper.connection.timeout.ms = null
15:01:13 kafka | zookeeper.max.in.flight.requests = 10
15:01:13 kafka | zookeeper.metadata.migration.enable = false
15:01:13 kafka | zookeeper.session.timeout.ms = 18000
15:01:13 kafka | zookeeper.set.acl = false
15:01:13 kafka | zookeeper.ssl.cipher.suites = null
15:01:13 kafka | zookeeper.ssl.client.enable = false
15:01:13 kafka | zookeeper.ssl.crl.enable = false
15:01:13 kafka | zookeeper.ssl.enabled.protocols = null
15:01:13 kafka | zookeeper.ssl.endpoint.identification.algorithm = HTTPS
15:01:13 kafka | zookeeper.ssl.keystore.location = null
15:01:13 kafka | zookeeper.ssl.keystore.password = null
15:01:13 kafka | zookeeper.ssl.keystore.type = null
15:01:13 kafka | zookeeper.ssl.ocsp.enable = false
15:01:13 kafka | zookeeper.ssl.protocol = TLSv1.2
15:01:13 kafka | zookeeper.ssl.truststore.location = null
15:01:13 kafka | zookeeper.ssl.truststore.password = null
15:01:13 kafka | zookeeper.ssl.truststore.type = null
15:01:13 kafka | (kafka.server.KafkaConfig)
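The KafkaConfig dump that ends above pins down the two listeners that matter for this CSIT run: containers on the compose network reach the broker through the advertised PLAINTEXT://kafka:9092 listener, while the build host maps in through PLAINTEXT_HOST://localhost:29092. A minimal sketch of exercising that host-side listener with the third-party kafka-python package (an assumption; the suites drive Kafka with their own clients), using the policy-pdp-pap topic name that appears later in this log:

```python
# Sketch of talking to this broker from the build host via the
# PLAINTEXT_HOST advertised listener (localhost:29092). kafka-python
# is an assumed client library, not part of the CSIT suites.
from kafka import KafkaConsumer, KafkaProducer

producer = KafkaProducer(bootstrap_servers="localhost:29092")
# auto.create.topics.enable = true in the dump above, so this send
# may create the topic on first use
producer.send("policy-pdp-pap", b"ping")
producer.flush()

consumer = KafkaConsumer(
    "policy-pdp-pap",
    bootstrap_servers="localhost:29092",
    auto_offset_reset="earliest",
    consumer_timeout_ms=5000,   # stop iterating after 5s with no records
)
for record in consumer:
    print(record.topic, record.offset, record.value)
    break
```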
15:01:13 kafka | [2025-06-16 14:55:07,834] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
15:01:13 kafka | [2025-06-16 14:55:07,834] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
15:01:13 kafka | [2025-06-16 14:55:07,834] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
15:01:13 kafka | [2025-06-16 14:55:07,838] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
15:01:13 kafka | [2025-06-16 14:55:07,870] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager)
15:01:13 kafka | [2025-06-16 14:55:07,872] INFO Attempting recovery for all logs in /var/lib/kafka/data since no clean shutdown file was found (kafka.log.LogManager)
15:01:13 kafka | [2025-06-16 14:55:07,885] INFO Loaded 0 logs in 14ms. (kafka.log.LogManager)
15:01:13 kafka | [2025-06-16 14:55:07,885] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
15:01:13 kafka | [2025-06-16 14:55:07,887] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
15:01:13 kafka | [2025-06-16 14:55:07,896] INFO Starting the log cleaner (kafka.log.LogCleaner)
15:01:13 kafka | [2025-06-16 14:55:07,945] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner)
15:01:13 kafka | [2025-06-16 14:55:07,961] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread)
15:01:13 kafka | [2025-06-16 14:55:07,976] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener)
15:01:13 kafka | [2025-06-16 14:55:08,028] INFO [BrokerToControllerChannelManager broker=1 name=forwarding]: Starting (kafka.server.BrokerToControllerRequestThread)
15:01:13 kafka | [2025-06-16 14:55:08,393] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
15:01:13 kafka | [2025-06-16 14:55:08,396] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.DataPlaneAcceptor)
15:01:13 kafka | [2025-06-16 14:55:08,417] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer)
15:01:13 kafka | [2025-06-16 14:55:08,418] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
15:01:13 kafka | [2025-06-16 14:55:08,418] INFO Awaiting socket connections on 0.0.0.0:29092. (kafka.network.DataPlaneAcceptor)
15:01:13 kafka | [2025-06-16 14:55:08,422] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT_HOST) (kafka.network.SocketServer)
15:01:13 kafka | [2025-06-16 14:55:08,426] INFO [BrokerToControllerChannelManager broker=1 name=alterPartition]: Starting (kafka.server.BrokerToControllerRequestThread)
15:01:13 kafka | [2025-06-16 14:55:08,944] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
15:01:13 kafka | [2025-06-16 14:55:08,957] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
15:01:13 kafka | [2025-06-16 14:55:08,965] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
15:01:13 kafka | [2025-06-16 14:55:08,965] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
15:01:13 kafka | [2025-06-16 14:55:08,997] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
15:01:13 kafka | [2025-06-16 14:55:09,015] INFO Creating /brokers/ids/1 (is it secure? false) (kafka.zk.KafkaZkClient)
15:01:13 kafka | [2025-06-16 14:55:09,035] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1750085709027,1750085709027,1,0,0,72057603493134337,258,0,27
15:01:13 kafka | (kafka.zk.KafkaZkClient)
15:01:13 kafka | [2025-06-16 14:55:09,035] INFO Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, czxid (broker epoch): 27 (kafka.zk.KafkaZkClient)
15:01:13 kafka | [2025-06-16 14:55:09,093] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread)
15:01:13 kafka | [2025-06-16 14:55:09,103] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
15:01:13 kafka | [2025-06-16 14:55:09,107] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
15:01:13 kafka | [2025-06-16 14:55:09,108] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
15:01:13 kafka | [2025-06-16 14:55:09,115] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient)
15:01:13 kafka | [2025-06-16 14:55:09,123] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator)
15:01:13 kafka | [2025-06-16 14:55:09,124] INFO [Controller id=1] 1 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController)
15:01:13 kafka | [2025-06-16 14:55:09,127] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController)
15:01:13 kafka | [2025-06-16 14:55:09,127] INFO [GroupCoordinator 1]: Startup complete.
(kafka.coordinator.group.GroupCoordinator) 15:01:13 kafka | [2025-06-16 14:55:09,133] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener) 15:01:13 kafka | [2025-06-16 14:55:09,143] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator) 15:01:13 kafka | [2025-06-16 14:55:09,147] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator) 15:01:13 kafka | [2025-06-16 14:55:09,152] INFO [Transaction Marker Channel Manager 1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager) 15:01:13 kafka | [2025-06-16 14:55:09,168] INFO [MetadataCache brokerId=1] Updated cache from existing to latest FinalizedFeaturesAndEpoch(features=Map(), epoch=0). (kafka.server.metadata.ZkMetadataCache) 15:01:13 kafka | [2025-06-16 14:55:09,168] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController) 15:01:13 kafka | [2025-06-16 14:55:09,173] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController) 15:01:13 kafka | [2025-06-16 14:55:09,177] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController) 15:01:13 kafka | [2025-06-16 14:55:09,179] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController) 15:01:13 kafka | [2025-06-16 14:55:09,182] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 15:01:13 kafka | [2025-06-16 14:55:09,195] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController) 15:01:13 kafka | [2025-06-16 14:55:09,200] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController) 15:01:13 kafka | [2025-06-16 14:55:09,205] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager) 15:01:13 kafka | [2025-06-16 14:55:09,210] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread) 15:01:13 kafka | [2025-06-16 14:55:09,219] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread) 15:01:13 kafka | [2025-06-16 14:55:09,219] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController) 15:01:13 kafka | [2025-06-16 14:55:09,219] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController) 15:01:13 kafka | [2025-06-16 14:55:09,220] INFO [Controller id=1] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController) 15:01:13 kafka | [2025-06-16 14:55:09,220] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController) 15:01:13 kafka | [2025-06-16 14:55:09,224] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. 
(kafka.network.SocketServer) 15:01:13 kafka | [2025-06-16 14:55:09,225] INFO [Controller id=1] List of topics to be deleted: (kafka.controller.KafkaController) 15:01:13 kafka | [2025-06-16 14:55:09,225] INFO [Controller id=1] List of topics ineligible for deletion: (kafka.controller.KafkaController) 15:01:13 kafka | [2025-06-16 14:55:09,226] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController) 15:01:13 kafka | [2025-06-16 14:55:09,227] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager) 15:01:13 kafka | [2025-06-16 14:55:09,228] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController) 15:01:13 kafka | [2025-06-16 14:55:09,233] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:09,251] INFO Kafka version: 7.4.9-ccs (org.apache.kafka.common.utils.AppInfoParser) 15:01:13 kafka | [2025-06-16 14:55:09,254] INFO Kafka commitId: 07d888cfc0d14765fe5557324f1fdb4ada6698a5 (org.apache.kafka.common.utils.AppInfoParser) 15:01:13 kafka | [2025-06-16 14:55:09,254] INFO Kafka startTimeMs: 1750085709241 (org.apache.kafka.common.utils.AppInfoParser) 15:01:13 kafka | [2025-06-16 14:55:09,257] INFO [KafkaServer id=1] started (kafka.server.KafkaServer) 15:01:13 kafka | [2025-06-16 14:55:09,258] INFO [ReplicaStateMachine controllerId=1] Initializing replica state (kafka.controller.ZkReplicaStateMachine) 15:01:13 kafka | [2025-06-16 14:55:09,267] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread) 15:01:13 kafka | [2025-06-16 14:55:09,267] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine) 15:01:13 kafka | [2025-06-16 14:55:09,272] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine) 15:01:13 kafka | [2025-06-16 14:55:09,273] DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine) 15:01:13 kafka | [2025-06-16 14:55:09,273] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine) 15:01:13 kafka | [2025-06-16 14:55:09,274] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine) 15:01:13 kafka | [2025-06-16 14:55:09,277] DEBUG [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine) 15:01:13 kafka | [2025-06-16 14:55:09,278] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController) 15:01:13 kafka | [2025-06-16 14:55:09,288] INFO [Controller id=1] Partitions undergoing preferred replica election: (kafka.controller.KafkaController) 15:01:13 kafka | [2025-06-16 14:55:09,289] INFO [Controller id=1] Partitions that completed preferred replica election: (kafka.controller.KafkaController) 15:01:13 kafka | [2025-06-16 14:55:09,291] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController) 15:01:13 kafka | [2025-06-16 14:55:09,291] INFO [Controller id=1] 
Resuming preferred replica election for partitions: (kafka.controller.KafkaController)
15:01:13 kafka | [2025-06-16 14:55:09,292] INFO [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered (kafka.controller.KafkaController)
15:01:13 kafka | [2025-06-16 14:55:09,308] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController)
15:01:13 kafka | [2025-06-16 14:55:09,326] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 0 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
15:01:13 kafka | [2025-06-16 14:55:09,352] INFO [BrokerToControllerChannelManager broker=1 name=forwarding]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread)
15:01:13 kafka | [2025-06-16 14:55:09,359] INFO [BrokerToControllerChannelManager broker=1 name=alterPartition]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread)
15:01:13 kafka | [2025-06-16 14:55:14,309] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController)
15:01:13 kafka | [2025-06-16 14:55:14,310] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController)
15:01:13 kafka | [2025-06-16 14:55:57,058] INFO Creating topic policy-pdp-pap with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient)
15:01:13 kafka | [2025-06-16 14:55:57,058] DEBUG [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block (kafka.controller.KafkaController)
15:01:13 kafka | [2025-06-16 14:55:57,060] INFO Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient)
15:01:13 kafka | [2025-06-16 14:55:57,065] INFO [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 (kafka.controller.KafkaController)
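The AdminZkClient entries above record the two topics this run depends on being created: policy-pdp-pap with broker defaults, and __consumer_offsets with 50 partitions and the compacted-log settings shown. A hedged sketch of issuing an equivalent creation through the Kafka admin API with kafka-python (an assumption; in the log the broker materializes these topics itself, and __consumer_offsets is internal, so a stand-in name is used here):

```python
# Sketch only: creates a compacted, 50-partition topic with the same
# topic configs the log shows for __consumer_offsets. kafka-python is
# an assumed client; the broker in the log does this via AdminZkClient.
from kafka.admin import KafkaAdminClient, NewTopic

admin = KafkaAdminClient(bootstrap_servers="localhost:29092")
admin.create_topics([
    NewTopic(
        name="example-offsets-like",   # hypothetical; __consumer_offsets is internal
        num_partitions=50,
        replication_factor=1,          # single-broker test cluster
        topic_configs={
            "compression.type": "producer",
            "cleanup.policy": "compact",
            "segment.bytes": "104857600",
        },
    )
])
admin.close()
```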
15:01:13 kafka | [2025-06-16 14:55:57,108] INFO [Controller id=1] New topics: [Set(policy-pdp-pap, __consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-pdp-pap,Some(0tGhbWOsSE6IPwILN0dlNw),Map(policy-pdp-pap-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))), TopicIdReplicaAssignment(__consumer_offsets,Some(5cgXABVFQiGM78mGoDn3ig),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) 15:01:13 kafka | [2025-06-16 14:55:57,110] INFO [Controller id=1] New partition creation callback for __consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-37,policy-pdp-pap-0,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.controller.KafkaController) 15:01:13 kafka | [2025-06-16 14:55:57,113] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,113] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,113] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,113] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,113] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from 
NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,113] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,113] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,113] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,113] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,113] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,113] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,113] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,113] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,114] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,114] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,114] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,114] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,114] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,114] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,114] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,114] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,114] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from 
NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,114] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,114] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,114] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,114] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,114] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,114] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,114] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,114] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,114] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,115] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,115] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,115] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,115] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,115] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,115] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,115] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,115] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from 
NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,115] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,115] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,115] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,115] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,115] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,115] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,115] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,115] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,115] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,115] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,115] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,115] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,115] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,121] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NonExistentReplica to NewReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,121] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NonExistentReplica to NewReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,121] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NonExistentReplica to NewReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,121] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NonExistentReplica to NewReplica (state.change.logger) 15:01:13 
kafka | [2025-06-16 14:55:57,121] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NonExistentReplica to NewReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,121] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NonExistentReplica to NewReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,122] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NonExistentReplica to NewReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,122] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NonExistentReplica to NewReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,122] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NonExistentReplica to NewReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,122] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NonExistentReplica to NewReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,122] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NonExistentReplica to NewReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,122] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NonExistentReplica to NewReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,122] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NonExistentReplica to NewReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,122] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NonExistentReplica to NewReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,122] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NonExistentReplica to NewReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,122] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NonExistentReplica to NewReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,122] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NonExistentReplica to NewReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,122] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NonExistentReplica to NewReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,122] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NonExistentReplica to NewReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,123] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NonExistentReplica to NewReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,123] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NonExistentReplica to NewReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,123] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NonExistentReplica to NewReplica (state.change.logger) 
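
The TRACE entries above show the controller walking every new replica through its state machine: NonExistentReplica to NewReplica here, then NewReplica to OnlineReplica once the LeaderAndIsr round completes further down. A minimal Python sketch of just the transitions visible in this log follows; ReplicaStateMachine, VALID_TRANSITIONS and transition() are illustrative stand-ins, not Kafka's Scala implementation, which has additional states (e.g. OfflineReplica) this run never exercises.

    # Minimal sketch of the replica state machine visible in this log;
    # names are illustrative, not Kafka's Scala implementation.
    VALID_TRANSITIONS = {
        "NonExistentReplica": {"NewReplica"},
        "NewReplica": {"OnlineReplica"},
        "OnlineReplica": set(),  # later states (e.g. OfflineReplica) not shown in this run
    }

    class ReplicaStateMachine:
        def __init__(self):
            self.states = {}  # (partition, replica_id) -> current state

        def transition(self, partition, replica_id, target):
            current = self.states.get((partition, replica_id), "NonExistentReplica")
            if target not in VALID_TRANSITIONS[current]:
                raise ValueError(f"illegal transition {current} -> {target} for {partition}")
            self.states[(partition, replica_id)] = target
            print(f"Changed state of replica {replica_id} for partition "
                  f"{partition} from {current} to {target}")

    sm = ReplicaStateMachine()
    sm.transition("__consumer_offsets-32", 1, "NewReplica")
    sm.transition("__consumer_offsets-32", 1, "OnlineReplica")

Running the sketch reproduces the shape of the TRACE lines above for one partition; the controller performs the same two steps for all 50 __consumer_offsets partitions plus policy-pdp-pap-0.
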
15:01:13 kafka | [2025-06-16 14:55:57,123] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NonExistentReplica to NewReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,123] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NonExistentReplica to NewReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,123] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NonExistentReplica to NewReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,123] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NonExistentReplica to NewReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,123] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NonExistentReplica to NewReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,123] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NonExistentReplica to NewReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,123] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NonExistentReplica to NewReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,123] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NonExistentReplica to NewReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,124] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NonExistentReplica to NewReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,124] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NonExistentReplica to NewReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,124] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NonExistentReplica to NewReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,124] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NonExistentReplica to NewReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,124] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NonExistentReplica to NewReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,124] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NonExistentReplica to NewReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,124] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NonExistentReplica to NewReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,124] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NonExistentReplica to NewReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,124] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NonExistentReplica to NewReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,124] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NonExistentReplica to NewReplica 
(state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,124] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NonExistentReplica to NewReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,124] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NonExistentReplica to NewReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,124] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NonExistentReplica to NewReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,124] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NonExistentReplica to NewReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,124] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NonExistentReplica to NewReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,124] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NonExistentReplica to NewReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,124] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NonExistentReplica to NewReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,124] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NonExistentReplica to NewReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,124] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NonExistentReplica to NewReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,124] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NonExistentReplica to NewReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,124] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NonExistentReplica to NewReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,124] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,266] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,266] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,266] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,266] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,266] INFO 
[Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,266] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,267] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,267] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,267] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,267] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,267] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,267] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,267] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,267] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,267] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,267] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,267] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, 
leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,267] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,267] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,267] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,267] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,267] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,267] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,267] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,267] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,267] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,267] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,267] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,267] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,267] INFO 
[Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,267] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,267] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,267] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,267] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,267] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,268] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,268] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,268] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,268] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,268] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,268] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,268] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, 
leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,268] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,268] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,268] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,268] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,268] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,268] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,268] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,268] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,268] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,270] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-13 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,271] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-46 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,271] TRACE 
[Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-9 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,271] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-42 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,271] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-21 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,271] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-17 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,271] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-30 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,271] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-26 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,271] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-5 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,271] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-38 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,271] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, 
controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,271] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-34 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,271] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-16 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,271] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-45 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,271] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-12 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,271] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-41 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,271] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-24 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,271] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-20 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,271] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, 
leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-49 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,271] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-0 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,271] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-29 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,271] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-25 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,271] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-8 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,272] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-37 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,272] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-4 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,272] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-33 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,272] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-15 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,272] TRACE 
[Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-48 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,272] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-11 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,272] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-44 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,272] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-23 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,272] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-19 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,272] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-32 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,272] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-28 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,272] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-7 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,272] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, 
controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-40 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,272] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-3 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,272] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-36 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,272] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-47 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,272] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-14 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,272] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-43 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,272] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-10 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,272] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-22 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,272] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, 
leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-18 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,272] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-31 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,272] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-27 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,272] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-39 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,272] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-6 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,272] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-35 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,272] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-pdp-pap-0 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,272] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-2 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,273] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 51 become-leader and 0 become-follower partitions (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,276] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 51 partitions (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,277] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition 
__consumer_offsets-32 from NewReplica to OnlineReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,277] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NewReplica to OnlineReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,278] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NewReplica to OnlineReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,278] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NewReplica to OnlineReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,278] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NewReplica to OnlineReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,278] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NewReplica to OnlineReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,278] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NewReplica to OnlineReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,278] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NewReplica to OnlineReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,278] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NewReplica to OnlineReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,278] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NewReplica to OnlineReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,278] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NewReplica to OnlineReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,278] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NewReplica to OnlineReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,278] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NewReplica to OnlineReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,278] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NewReplica to OnlineReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,278] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NewReplica to OnlineReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,278] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NewReplica to OnlineReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,278] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NewReplica to OnlineReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,278] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NewReplica to OnlineReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,278] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NewReplica to OnlineReplica (state.change.logger) 15:01:13 
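
When verifying a run like this, it is easier to tally these state-change entries than to read them one by one. The standalone sketch below (not part of the CSIT suite; tally() and PATTERN are hypothetical helpers) uses a regex derived from the exact state.change.logger format shown above:

    import re
    from collections import Counter

    # Matches the "Changed state of replica ... from ... to ..." entries above.
    PATTERN = re.compile(
        r"Changed state of replica (?P<replica>\d+) for partition "
        r"(?P<partition>\S+) from (?P<src>\w+) to (?P<dst>\w+)"
    )

    def tally(lines):
        """Count (source state, target state) transitions across log lines."""
        counts = Counter()
        for line in lines:
            match = PATTERN.search(line)
            if match:
                counts[(match.group("src"), match.group("dst"))] += 1
        return counts

    sample = ("[2025-06-16 14:55:57,278] TRACE [Controller id=1 epoch=1] "
              "Changed state of replica 1 for partition __consumer_offsets-32 "
              "from NewReplica to OnlineReplica (state.change.logger)")
    print(tally([sample]))  # Counter({('NewReplica', 'OnlineReplica'): 1})

Applied to the full console log, the same pattern should confirm the 51 NewReplica-to-OnlineReplica transitions matching the "51 become-leader and 0 become-follower partitions" reported above.
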
kafka | [2025-06-16 14:55:57,278] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NewReplica to OnlineReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,278] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NewReplica to OnlineReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,278] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NewReplica to OnlineReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,278] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NewReplica to OnlineReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,278] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NewReplica to OnlineReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,278] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NewReplica to OnlineReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,278] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NewReplica to OnlineReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,278] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NewReplica to OnlineReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,278] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NewReplica to OnlineReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,278] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NewReplica to OnlineReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,278] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NewReplica to OnlineReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,278] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NewReplica to OnlineReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,278] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NewReplica to OnlineReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,278] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NewReplica to OnlineReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,278] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NewReplica to OnlineReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,278] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NewReplica to OnlineReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,279] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NewReplica to OnlineReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,279] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NewReplica to OnlineReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,279] TRACE [Controller id=1 epoch=1] Changed state 
of replica 1 for partition __consumer_offsets-38 from NewReplica to OnlineReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,279] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NewReplica to OnlineReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,279] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NewReplica to OnlineReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,279] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NewReplica to OnlineReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,279] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NewReplica to OnlineReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,279] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NewReplica to OnlineReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,279] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NewReplica to OnlineReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,279] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NewReplica to OnlineReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,279] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NewReplica to OnlineReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,279] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NewReplica to OnlineReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,279] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NewReplica to OnlineReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,279] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NewReplica to OnlineReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,279] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NewReplica to OnlineReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,279] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NewReplica to OnlineReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,279] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,283] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 51 partitions (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,284] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,284] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], 
partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,284] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,284] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,284] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,284] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,284] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,284] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,284] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,284] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,284] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, 
isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,284] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,284] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,284] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,284] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,284] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,284] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,284] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,284] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,285] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, 
leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,285] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,285] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,285] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,285] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,285] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,285] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,285] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,285] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,285] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, 
leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,285] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,285] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,285] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,285] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,285] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,285] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,285] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,285] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,285] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, 
controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,285] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,285] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,285] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,285] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,285] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,285] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,285] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,285] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,285] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', 
partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,285] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,285] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,285] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,285] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,322] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-3 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,322] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,322] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-41 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,322] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,322] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-33 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,322] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,322] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-19 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,322] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 
from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-34 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,322] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-4 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,322] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-11 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,322] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,322] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-49 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,322] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-39 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,322] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-9 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,322] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-24 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,322] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-31 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,322] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-46 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,322] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,322] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-16 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,322] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-2 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,322] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-25 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,322] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-40 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,322] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition 
__consumer_offsets-47 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,322] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,322] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-32 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,322] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-37 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,322] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-7 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,322] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-22 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,322] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,322] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-44 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,322] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-14 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,322] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-23 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,322] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-38 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,322] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,322] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,323] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-45 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,323] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-15 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,323] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-30 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,323] TRACE 
[Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-0 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,323] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-35 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,323] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-5 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,323] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,323] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-27 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,323] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-42 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,323] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-12 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,323] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-21 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,323] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-36 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,323] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-6 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,323] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-43 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,323] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-13 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,323] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-28 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,324] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, 
__consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-37, policy-pdp-pap-0, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager) 15:01:13 kafka | [2025-06-16 14:55:57,324] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 51 partitions (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,375] INFO [LogLoader partition=__consumer_offsets-3, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 15:01:13 kafka | [2025-06-16 14:55:57,387] INFO Created log for partition __consumer_offsets-3 in /var/lib/kafka/data/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 15:01:13 kafka | [2025-06-16 14:55:57,388] INFO [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition) 15:01:13 kafka | [2025-06-16 14:55:57,389] INFO [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition) 15:01:13 kafka | [2025-06-16 14:55:57,390] INFO [Broker id=1] Leader __consumer_offsets-3 with topic id Some(5cgXABVFQiGM78mGoDn3ig) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,403] INFO [LogLoader partition=__consumer_offsets-18, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 15:01:13 kafka | [2025-06-16 14:55:57,405] INFO Created log for partition __consumer_offsets-18 in /var/lib/kafka/data/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 15:01:13 kafka | [2025-06-16 14:55:57,405] INFO [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition) 15:01:13 kafka | [2025-06-16 14:55:57,405] INFO [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition) 15:01:13 kafka | [2025-06-16 14:55:57,405] INFO [Broker id=1] Leader __consumer_offsets-18 with topic id Some(5cgXABVFQiGM78mGoDn3ig) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
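The "Created log" entries that follow show each __consumer_offsets partition being created with {cleanup.policy=compact, compression.type=producer, segment.bytes=104857600}. A minimal AdminClient sketch that creates a topic with those same log properties is below; the topic name and the localhost:9092 bootstrap address are placeholders, not values from this job.

```java
import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

// Creates a compacted topic with the same per-log properties the broker
// reports above for __consumer_offsets. Name and address are placeholders.
public class CompactedTopicSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (Admin admin = Admin.create(props)) {
            NewTopic topic = new NewTopic("example-compacted-topic", 1, (short) 1)
                    .configs(Map.of(
                            "cleanup.policy", "compact",        // retain latest value per key
                            "compression.type", "producer",     // keep producer-chosen codec
                            "segment.bytes", "104857600"));     // 100 MiB segments, as logged
            admin.createTopics(List.of(topic)).all().get();
            System.out.println("Created example-compacted-topic");
        }
    }
}
```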
(state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,411] INFO [LogLoader partition=__consumer_offsets-41, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 15:01:13 kafka | [2025-06-16 14:55:57,412] INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 15:01:13 kafka | [2025-06-16 14:55:57,412] INFO [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition) 15:01:13 kafka | [2025-06-16 14:55:57,412] INFO [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition) 15:01:13 kafka | [2025-06-16 14:55:57,412] INFO [Broker id=1] Leader __consumer_offsets-41 with topic id Some(5cgXABVFQiGM78mGoDn3ig) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,419] INFO [LogLoader partition=__consumer_offsets-10, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 15:01:13 kafka | [2025-06-16 14:55:57,419] INFO Created log for partition __consumer_offsets-10 in /var/lib/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 15:01:13 kafka | [2025-06-16 14:55:57,419] INFO [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition) 15:01:13 kafka | [2025-06-16 14:55:57,419] INFO [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition) 15:01:13 kafka | [2025-06-16 14:55:57,419] INFO [Broker id=1] Leader __consumer_offsets-10 with topic id Some(5cgXABVFQiGM78mGoDn3ig) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,428] INFO [LogLoader partition=__consumer_offsets-33, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 15:01:13 kafka | [2025-06-16 14:55:57,428] INFO Created log for partition __consumer_offsets-33 in /var/lib/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 15:01:13 kafka | [2025-06-16 14:55:57,428] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition) 15:01:13 kafka | [2025-06-16 14:55:57,428] INFO [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition) 15:01:13 kafka | [2025-06-16 14:55:57,429] INFO [Broker id=1] Leader __consumer_offsets-33 with topic id Some(5cgXABVFQiGM78mGoDn3ig) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,437] INFO [LogLoader partition=__consumer_offsets-48, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 15:01:13 kafka | [2025-06-16 14:55:57,438] INFO Created log for partition __consumer_offsets-48 in /var/lib/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 15:01:13 kafka | [2025-06-16 14:55:57,438] INFO [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition) 15:01:13 kafka | [2025-06-16 14:55:57,438] INFO [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition) 15:01:13 kafka | [2025-06-16 14:55:57,438] INFO [Broker id=1] Leader __consumer_offsets-48 with topic id Some(5cgXABVFQiGM78mGoDn3ig) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,445] INFO [LogLoader partition=__consumer_offsets-19, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 15:01:13 kafka | [2025-06-16 14:55:57,445] INFO Created log for partition __consumer_offsets-19 in /var/lib/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 15:01:13 kafka | [2025-06-16 14:55:57,445] INFO [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition) 15:01:13 kafka | [2025-06-16 14:55:57,445] INFO [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition) 15:01:13 kafka | [2025-06-16 14:55:57,445] INFO [Broker id=1] Leader __consumer_offsets-19 with topic id Some(5cgXABVFQiGM78mGoDn3ig) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,451] INFO [LogLoader partition=__consumer_offsets-34, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 15:01:13 kafka | [2025-06-16 14:55:57,452] INFO Created log for partition __consumer_offsets-34 in /var/lib/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 15:01:13 kafka | [2025-06-16 14:55:57,452] INFO [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition) 15:01:13 kafka | [2025-06-16 14:55:57,452] INFO [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition) 15:01:13 kafka | [2025-06-16 14:55:57,452] INFO [Broker id=1] Leader __consumer_offsets-34 with topic id Some(5cgXABVFQiGM78mGoDn3ig) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
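The recurring "Loading producer state till offset 0" lines refer to the per-producer (producerId, epoch, sequence) bookkeeping that the log loader rebuilds on startup; it is populated only by idempotent or transactional producers. A sketch of such a producer follows, using the policy-pdp-pap topic seen in this log; the bootstrap address is a placeholder.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

// An idempotent producer: its sequence numbers are what "Loading producer
// state till offset N" restores after a restart. Address is a placeholder.
public class IdempotentProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);
        props.put(ProducerConfig.ACKS_CONFIG, "all"); // required for idempotence
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("policy-pdp-pap", "key", "value"));
            producer.flush();
        }
    }
}
```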
(state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,457] INFO [LogLoader partition=__consumer_offsets-4, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 15:01:13 kafka | [2025-06-16 14:55:57,458] INFO Created log for partition __consumer_offsets-4 in /var/lib/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 15:01:13 kafka | [2025-06-16 14:55:57,458] INFO [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition) 15:01:13 kafka | [2025-06-16 14:55:57,458] INFO [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition) 15:01:13 kafka | [2025-06-16 14:55:57,458] INFO [Broker id=1] Leader __consumer_offsets-4 with topic id Some(5cgXABVFQiGM78mGoDn3ig) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,465] INFO [LogLoader partition=__consumer_offsets-11, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 15:01:13 kafka | [2025-06-16 14:55:57,466] INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 15:01:13 kafka | [2025-06-16 14:55:57,466] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition) 15:01:13 kafka | [2025-06-16 14:55:57,466] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition) 15:01:13 kafka | [2025-06-16 14:55:57,466] INFO [Broker id=1] Leader __consumer_offsets-11 with topic id Some(5cgXABVFQiGM78mGoDn3ig) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,472] INFO [LogLoader partition=__consumer_offsets-26, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 15:01:13 kafka | [2025-06-16 14:55:57,473] INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 15:01:13 kafka | [2025-06-16 14:55:57,473] INFO [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition) 15:01:13 kafka | [2025-06-16 14:55:57,473] INFO [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition) 15:01:13 kafka | [2025-06-16 14:55:57,473] INFO [Broker id=1] Leader __consumer_offsets-26 with topic id Some(5cgXABVFQiGM78mGoDn3ig) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,479] INFO [LogLoader partition=__consumer_offsets-49, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 15:01:13 kafka | [2025-06-16 14:55:57,480] INFO Created log for partition __consumer_offsets-49 in /var/lib/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 15:01:13 kafka | [2025-06-16 14:55:57,480] INFO [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition) 15:01:13 kafka | [2025-06-16 14:55:57,480] INFO [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition) 15:01:13 kafka | [2025-06-16 14:55:57,480] INFO [Broker id=1] Leader __consumer_offsets-49 with topic id Some(5cgXABVFQiGM78mGoDn3ig) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,486] INFO [LogLoader partition=__consumer_offsets-39, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 15:01:13 kafka | [2025-06-16 14:55:57,487] INFO Created log for partition __consumer_offsets-39 in /var/lib/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 15:01:13 kafka | [2025-06-16 14:55:57,487] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition) 15:01:13 kafka | [2025-06-16 14:55:57,487] INFO [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition) 15:01:13 kafka | [2025-06-16 14:55:57,487] INFO [Broker id=1] Leader __consumer_offsets-39 with topic id Some(5cgXABVFQiGM78mGoDn3ig) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,493] INFO [LogLoader partition=__consumer_offsets-9, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 15:01:13 kafka | [2025-06-16 14:55:57,493] INFO Created log for partition __consumer_offsets-9 in /var/lib/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 15:01:13 kafka | [2025-06-16 14:55:57,493] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition) 15:01:13 kafka | [2025-06-16 14:55:57,493] INFO [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition) 15:01:13 kafka | [2025-06-16 14:55:57,493] INFO [Broker id=1] Leader __consumer_offsets-9 with topic id Some(5cgXABVFQiGM78mGoDn3ig) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
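"No checkpointed highwatermark is found" is expected on first startup: the broker keeps high watermarks in a replication-offset-checkpoint file under the log directory, and none exists yet. The parser below is a sketch under the assumption that the on-disk layout is a version line, an entry count, then one "topic partition offset" line per partition, which matches recent Kafka releases; the path is the data directory from this log.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

// Reads the high-watermark checkpoint whose absence produces the
// "No checkpointed highwatermark is found" messages above.
// Assumed format: <version>\n<entry count>\n then "topic partition offset".
public class HighWatermarkCheckpointSketch {
    public static void main(String[] args) throws IOException {
        Path file = Path.of("/var/lib/kafka/data/replication-offset-checkpoint");
        List<String> lines = Files.readAllLines(file);
        int version = Integer.parseInt(lines.get(0));
        int entries = Integer.parseInt(lines.get(1));
        System.out.printf("checkpoint version=%d entries=%d%n", version, entries);
        for (int i = 2; i < 2 + entries; i++) {
            String[] parts = lines.get(i).split(" ");
            System.out.printf("%s-%s high watermark=%s%n", parts[0], parts[1], parts[2]);
        }
    }
}
```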
(state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,499] INFO [LogLoader partition=__consumer_offsets-24, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 15:01:13 kafka | [2025-06-16 14:55:57,500] INFO Created log for partition __consumer_offsets-24 in /var/lib/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 15:01:13 kafka | [2025-06-16 14:55:57,500] INFO [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition) 15:01:13 kafka | [2025-06-16 14:55:57,500] INFO [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition) 15:01:13 kafka | [2025-06-16 14:55:57,500] INFO [Broker id=1] Leader __consumer_offsets-24 with topic id Some(5cgXABVFQiGM78mGoDn3ig) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,507] INFO [LogLoader partition=__consumer_offsets-31, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 15:01:13 kafka | [2025-06-16 14:55:57,508] INFO Created log for partition __consumer_offsets-31 in /var/lib/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 15:01:13 kafka | [2025-06-16 14:55:57,508] INFO [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition) 15:01:13 kafka | [2025-06-16 14:55:57,508] INFO [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition) 15:01:13 kafka | [2025-06-16 14:55:57,508] INFO [Broker id=1] Leader __consumer_offsets-31 with topic id Some(5cgXABVFQiGM78mGoDn3ig) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,516] INFO [LogLoader partition=__consumer_offsets-46, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 15:01:13 kafka | [2025-06-16 14:55:57,517] INFO Created log for partition __consumer_offsets-46 in /var/lib/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 15:01:13 kafka | [2025-06-16 14:55:57,517] INFO [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition) 15:01:13 kafka | [2025-06-16 14:55:57,517] INFO [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition) 15:01:13 kafka | [2025-06-16 14:55:57,517] INFO [Broker id=1] Leader __consumer_offsets-46 with topic id Some(5cgXABVFQiGM78mGoDn3ig) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,527] INFO [LogLoader partition=__consumer_offsets-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 15:01:13 kafka | [2025-06-16 14:55:57,528] INFO Created log for partition __consumer_offsets-1 in /var/lib/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 15:01:13 kafka | [2025-06-16 14:55:57,528] INFO [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition) 15:01:13 kafka | [2025-06-16 14:55:57,528] INFO [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition) 15:01:13 kafka | [2025-06-16 14:55:57,528] INFO [Broker id=1] Leader __consumer_offsets-1 with topic id Some(5cgXABVFQiGM78mGoDn3ig) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,535] INFO [LogLoader partition=__consumer_offsets-16, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 15:01:13 kafka | [2025-06-16 14:55:57,535] INFO Created log for partition __consumer_offsets-16 in /var/lib/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 15:01:13 kafka | [2025-06-16 14:55:57,535] INFO [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition) 15:01:13 kafka | [2025-06-16 14:55:57,535] INFO [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition) 15:01:13 kafka | [2025-06-16 14:55:57,535] INFO [Broker id=1] Leader __consumer_offsets-16 with topic id Some(5cgXABVFQiGM78mGoDn3ig) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,542] INFO [LogLoader partition=__consumer_offsets-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 15:01:13 kafka | [2025-06-16 14:55:57,543] INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 15:01:13 kafka | [2025-06-16 14:55:57,543] INFO [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition) 15:01:13 kafka | [2025-06-16 14:55:57,543] INFO [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition) 15:01:13 kafka | [2025-06-16 14:55:57,543] INFO [Broker id=1] Leader __consumer_offsets-2 with topic id Some(5cgXABVFQiGM78mGoDn3ig) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
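Each "Leader __consumer_offsets-N ... ISR [1]" entry records broker 1 becoming leader with itself as the sole in-sync replica, consistent with a single-broker CSIT setup (replicas=[1], isr=[1]). The same metadata can be read back from a client; a sketch follows, assuming a placeholder bootstrap address and a Kafka client of 3.1 or later (older releases expose the equivalent all() instead of allTopicNames()).

```java
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.TopicDescription;

// Prints leader and ISR per partition, i.e. the same facts the broker logs
// above as "Leader __consumer_offsets-N ... ISR [1]".
public class DescribeLeadersSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (Admin admin = Admin.create(props)) {
            TopicDescription desc = admin.describeTopics(List.of("__consumer_offsets"))
                    .allTopicNames().get().get("__consumer_offsets");
            desc.partitions().forEach(p ->
                    System.out.printf("partition=%d leader=%s isr=%s%n",
                            p.partition(), p.leader(), p.isr()));
        }
    }
}
```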
(state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,550] INFO [LogLoader partition=__consumer_offsets-25, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 15:01:13 kafka | [2025-06-16 14:55:57,551] INFO Created log for partition __consumer_offsets-25 in /var/lib/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 15:01:13 kafka | [2025-06-16 14:55:57,551] INFO [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition) 15:01:13 kafka | [2025-06-16 14:55:57,551] INFO [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition) 15:01:13 kafka | [2025-06-16 14:55:57,551] INFO [Broker id=1] Leader __consumer_offsets-25 with topic id Some(5cgXABVFQiGM78mGoDn3ig) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,558] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 15:01:13 kafka | [2025-06-16 14:55:57,559] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 15:01:13 kafka | [2025-06-16 14:55:57,559] INFO [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition) 15:01:13 kafka | [2025-06-16 14:55:57,559] INFO [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition) 15:01:13 kafka | [2025-06-16 14:55:57,559] INFO [Broker id=1] Leader __consumer_offsets-40 with topic id Some(5cgXABVFQiGM78mGoDn3ig) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,565] INFO [LogLoader partition=__consumer_offsets-47, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 15:01:13 kafka | [2025-06-16 14:55:57,566] INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 15:01:13 kafka | [2025-06-16 14:55:57,566] INFO [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition) 15:01:13 kafka | [2025-06-16 14:55:57,566] INFO [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition) 15:01:13 kafka | [2025-06-16 14:55:57,566] INFO [Broker id=1] Leader __consumer_offsets-47 with topic id Some(5cgXABVFQiGM78mGoDn3ig) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,573] INFO [LogLoader partition=__consumer_offsets-17, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 15:01:13 kafka | [2025-06-16 14:55:57,574] INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 15:01:13 kafka | [2025-06-16 14:55:57,574] INFO [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition) 15:01:13 kafka | [2025-06-16 14:55:57,574] INFO [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition) 15:01:13 kafka | [2025-06-16 14:55:57,574] INFO [Broker id=1] Leader __consumer_offsets-17 with topic id Some(5cgXABVFQiGM78mGoDn3ig) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,580] INFO [LogLoader partition=__consumer_offsets-32, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 15:01:13 kafka | [2025-06-16 14:55:57,581] INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 15:01:13 kafka | [2025-06-16 14:55:57,581] INFO [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition) 15:01:13 kafka | [2025-06-16 14:55:57,581] INFO [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition) 15:01:13 kafka | [2025-06-16 14:55:57,581] INFO [Broker id=1] Leader __consumer_offsets-32 with topic id Some(5cgXABVFQiGM78mGoDn3ig) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,588] INFO [LogLoader partition=__consumer_offsets-37, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 15:01:13 kafka | [2025-06-16 14:55:57,588] INFO Created log for partition __consumer_offsets-37 in /var/lib/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 15:01:13 kafka | [2025-06-16 14:55:57,588] INFO [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition) 15:01:13 kafka | [2025-06-16 14:55:57,588] INFO [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition) 15:01:13 kafka | [2025-06-16 14:55:57,588] INFO [Broker id=1] Leader __consumer_offsets-37 with topic id Some(5cgXABVFQiGM78mGoDn3ig) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,595] INFO [LogLoader partition=__consumer_offsets-7, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 15:01:13 kafka | [2025-06-16 14:55:57,595] INFO Created log for partition __consumer_offsets-7 in /var/lib/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 15:01:13 kafka | [2025-06-16 14:55:57,595] INFO [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition) 15:01:13 kafka | [2025-06-16 14:55:57,595] INFO [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition) 15:01:13 kafka | [2025-06-16 14:55:57,595] INFO [Broker id=1] Leader __consumer_offsets-7 with topic id Some(5cgXABVFQiGM78mGoDn3ig) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,602] INFO [LogLoader partition=__consumer_offsets-22, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 15:01:13 kafka | [2025-06-16 14:55:57,603] INFO Created log for partition __consumer_offsets-22 in /var/lib/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 15:01:13 kafka | [2025-06-16 14:55:57,603] INFO [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition) 15:01:13 kafka | [2025-06-16 14:55:57,603] INFO [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition) 15:01:13 kafka | [2025-06-16 14:55:57,603] INFO [Broker id=1] Leader __consumer_offsets-22 with topic id Some(5cgXABVFQiGM78mGoDn3ig) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,609] INFO [LogLoader partition=__consumer_offsets-29, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 15:01:13 kafka | [2025-06-16 14:55:57,610] INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 15:01:13 kafka | [2025-06-16 14:55:57,610] INFO [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition) 15:01:13 kafka | [2025-06-16 14:55:57,610] INFO [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition) 15:01:13 kafka | [2025-06-16 14:55:57,610] INFO [Broker id=1] Leader __consumer_offsets-29 with topic id Some(5cgXABVFQiGM78mGoDn3ig) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,616] INFO [LogLoader partition=__consumer_offsets-44, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 15:01:13 kafka | [2025-06-16 14:55:57,617] INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 15:01:13 kafka | [2025-06-16 14:55:57,617] INFO [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition) 15:01:13 kafka | [2025-06-16 14:55:57,617] INFO [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition) 15:01:13 kafka | [2025-06-16 14:55:57,617] INFO [Broker id=1] Leader __consumer_offsets-44 with topic id Some(5cgXABVFQiGM78mGoDn3ig) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,624] INFO [LogLoader partition=__consumer_offsets-14, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 15:01:13 kafka | [2025-06-16 14:55:57,624] INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 15:01:13 kafka | [2025-06-16 14:55:57,624] INFO [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition) 15:01:13 kafka | [2025-06-16 14:55:57,624] INFO [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition) 15:01:13 kafka | [2025-06-16 14:55:57,624] INFO [Broker id=1] Leader __consumer_offsets-14 with topic id Some(5cgXABVFQiGM78mGoDn3ig) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,634] INFO [LogLoader partition=__consumer_offsets-23, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 15:01:13 kafka | [2025-06-16 14:55:57,635] INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 15:01:13 kafka | [2025-06-16 14:55:57,635] INFO [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition) 15:01:13 kafka | [2025-06-16 14:55:57,635] INFO [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition) 15:01:13 kafka | [2025-06-16 14:55:57,635] INFO [Broker id=1] Leader __consumer_offsets-23 with topic id Some(5cgXABVFQiGM78mGoDn3ig) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,642] INFO [LogLoader partition=__consumer_offsets-38, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 15:01:13 kafka | [2025-06-16 14:55:57,642] INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 15:01:13 kafka | [2025-06-16 14:55:57,642] INFO [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition) 15:01:13 kafka | [2025-06-16 14:55:57,642] INFO [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition) 15:01:13 kafka | [2025-06-16 14:55:57,642] INFO [Broker id=1] Leader __consumer_offsets-38 with topic id Some(5cgXABVFQiGM78mGoDn3ig) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,648] INFO [LogLoader partition=__consumer_offsets-8, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 15:01:13 kafka | [2025-06-16 14:55:57,648] INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 15:01:13 kafka | [2025-06-16 14:55:57,648] INFO [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition) 15:01:13 kafka | [2025-06-16 14:55:57,648] INFO [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition) 15:01:13 kafka | [2025-06-16 14:55:57,648] INFO [Broker id=1] Leader __consumer_offsets-8 with topic id Some(5cgXABVFQiGM78mGoDn3ig) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,655] INFO [LogLoader partition=policy-pdp-pap-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 15:01:13 kafka | [2025-06-16 14:55:57,656] INFO Created log for partition policy-pdp-pap-0 in /var/lib/kafka/data/policy-pdp-pap-0 with properties {} (kafka.log.LogManager) 15:01:13 kafka | [2025-06-16 14:55:57,656] INFO [Partition policy-pdp-pap-0 broker=1] No checkpointed highwatermark is found for partition policy-pdp-pap-0 (kafka.cluster.Partition) 15:01:13 kafka | [2025-06-16 14:55:57,656] INFO [Partition policy-pdp-pap-0 broker=1] Log loaded for partition policy-pdp-pap-0 with initial high watermark 0 (kafka.cluster.Partition) 15:01:13 kafka | [2025-06-16 14:55:57,656] INFO [Broker id=1] Leader policy-pdp-pap-0 with topic id Some(0tGhbWOsSE6IPwILN0dlNw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
15:01:13 kafka | [2025-06-16 14:55:57,663] INFO [LogLoader partition=__consumer_offsets-45, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
15:01:13 kafka | [2025-06-16 14:55:57,663] INFO Created log for partition __consumer_offsets-45 in /var/lib/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
15:01:13 kafka | [2025-06-16 14:55:57,664] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition)
15:01:13 kafka | [2025-06-16 14:55:57,664] INFO [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition)
15:01:13 kafka | [2025-06-16 14:55:57,664] INFO [Broker id=1] Leader __consumer_offsets-45 with topic id Some(5cgXABVFQiGM78mGoDn3ig) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
15:01:13 kafka | [2025-06-16 14:55:57,671] INFO [LogLoader partition=__consumer_offsets-15, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
15:01:13 kafka | [2025-06-16 14:55:57,671] INFO Created log for partition __consumer_offsets-15 in /var/lib/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
15:01:13 kafka | [2025-06-16 14:55:57,671] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition)
15:01:13 kafka | [2025-06-16 14:55:57,672] INFO [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition)
15:01:13 kafka | [2025-06-16 14:55:57,672] INFO [Broker id=1] Leader __consumer_offsets-15 with topic id Some(5cgXABVFQiGM78mGoDn3ig) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
15:01:13 kafka | [2025-06-16 14:55:57,678] INFO [LogLoader partition=__consumer_offsets-30, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
15:01:13 kafka | [2025-06-16 14:55:57,679] INFO Created log for partition __consumer_offsets-30 in /var/lib/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
15:01:13 kafka | [2025-06-16 14:55:57,679] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition)
15:01:13 kafka | [2025-06-16 14:55:57,679] INFO [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition)
15:01:13 kafka | [2025-06-16 14:55:57,679] INFO [Broker id=1] Leader __consumer_offsets-30 with topic id Some(5cgXABVFQiGM78mGoDn3ig) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
15:01:13 kafka | [2025-06-16 14:55:57,686] INFO [LogLoader partition=__consumer_offsets-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
15:01:13 kafka | [2025-06-16 14:55:57,687] INFO Created log for partition __consumer_offsets-0 in /var/lib/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
15:01:13 kafka | [2025-06-16 14:55:57,687] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition)
15:01:13 kafka | [2025-06-16 14:55:57,687] INFO [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition)
15:01:13 kafka | [2025-06-16 14:55:57,687] INFO [Broker id=1] Leader __consumer_offsets-0 with topic id Some(5cgXABVFQiGM78mGoDn3ig) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
15:01:13 kafka | [2025-06-16 14:55:57,693] INFO [LogLoader partition=__consumer_offsets-35, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
15:01:13 kafka | [2025-06-16 14:55:57,693] INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
15:01:13 kafka | [2025-06-16 14:55:57,693] INFO [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition)
15:01:13 kafka | [2025-06-16 14:55:57,693] INFO [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition)
15:01:13 kafka | [2025-06-16 14:55:57,693] INFO [Broker id=1] Leader __consumer_offsets-35 with topic id Some(5cgXABVFQiGM78mGoDn3ig) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
15:01:13 kafka | [2025-06-16 14:55:57,700] INFO [LogLoader partition=__consumer_offsets-5, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
15:01:13 kafka | [2025-06-16 14:55:57,701] INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
15:01:13 kafka | [2025-06-16 14:55:57,701] INFO [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition)
15:01:13 kafka | [2025-06-16 14:55:57,701] INFO [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition)
15:01:13 kafka | [2025-06-16 14:55:57,701] INFO [Broker id=1] Leader __consumer_offsets-5 with topic id Some(5cgXABVFQiGM78mGoDn3ig) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
15:01:13 kafka | [2025-06-16 14:55:57,708] INFO [LogLoader partition=__consumer_offsets-20, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
15:01:13 kafka | [2025-06-16 14:55:57,708] INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
15:01:13 kafka | [2025-06-16 14:55:57,708] INFO [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition)
15:01:13 kafka | [2025-06-16 14:55:57,708] INFO [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition)
15:01:13 kafka | [2025-06-16 14:55:57,708] INFO [Broker id=1] Leader __consumer_offsets-20 with topic id Some(5cgXABVFQiGM78mGoDn3ig) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
15:01:13 kafka | [2025-06-16 14:55:57,713] INFO [LogLoader partition=__consumer_offsets-27, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
15:01:13 kafka | [2025-06-16 14:55:57,714] INFO Created log for partition __consumer_offsets-27 in /var/lib/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
15:01:13 kafka | [2025-06-16 14:55:57,714] INFO [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition)
15:01:13 kafka | [2025-06-16 14:55:57,714] INFO [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition)
15:01:13 kafka | [2025-06-16 14:55:57,714] INFO [Broker id=1] Leader __consumer_offsets-27 with topic id Some(5cgXABVFQiGM78mGoDn3ig) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
15:01:13 kafka | [2025-06-16 14:55:57,721] INFO [LogLoader partition=__consumer_offsets-42, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
15:01:13 kafka | [2025-06-16 14:55:57,722] INFO Created log for partition __consumer_offsets-42 in /var/lib/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
15:01:13 kafka | [2025-06-16 14:55:57,722] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition)
15:01:13 kafka | [2025-06-16 14:55:57,722] INFO [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition)
15:01:13 kafka | [2025-06-16 14:55:57,722] INFO [Broker id=1] Leader __consumer_offsets-42 with topic id Some(5cgXABVFQiGM78mGoDn3ig) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
15:01:13 kafka | [2025-06-16 14:55:57,729] INFO [LogLoader partition=__consumer_offsets-12, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
15:01:13 kafka | [2025-06-16 14:55:57,730] INFO Created log for partition __consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
15:01:13 kafka | [2025-06-16 14:55:57,730] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition)
15:01:13 kafka | [2025-06-16 14:55:57,730] INFO [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition)
15:01:13 kafka | [2025-06-16 14:55:57,730] INFO [Broker id=1] Leader __consumer_offsets-12 with topic id Some(5cgXABVFQiGM78mGoDn3ig) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
15:01:13 kafka | [2025-06-16 14:55:57,736] INFO [LogLoader partition=__consumer_offsets-21, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
15:01:13 kafka | [2025-06-16 14:55:57,737] INFO Created log for partition __consumer_offsets-21 in /var/lib/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
15:01:13 kafka | [2025-06-16 14:55:57,737] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition)
15:01:13 kafka | [2025-06-16 14:55:57,737] INFO [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition)
15:01:13 kafka | [2025-06-16 14:55:57,737] INFO [Broker id=1] Leader __consumer_offsets-21 with topic id Some(5cgXABVFQiGM78mGoDn3ig) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
15:01:13 kafka | [2025-06-16 14:55:57,745] INFO [LogLoader partition=__consumer_offsets-36, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
15:01:13 kafka | [2025-06-16 14:55:57,745] INFO Created log for partition __consumer_offsets-36 in /var/lib/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
15:01:13 kafka | [2025-06-16 14:55:57,746] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition)
15:01:13 kafka | [2025-06-16 14:55:57,746] INFO [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition)
15:01:13 kafka | [2025-06-16 14:55:57,746] INFO [Broker id=1] Leader __consumer_offsets-36 with topic id Some(5cgXABVFQiGM78mGoDn3ig) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
15:01:13 kafka | [2025-06-16 14:55:57,751] INFO [LogLoader partition=__consumer_offsets-6, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
15:01:13 kafka | [2025-06-16 14:55:57,752] INFO Created log for partition __consumer_offsets-6 in /var/lib/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
15:01:13 kafka | [2025-06-16 14:55:57,752] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition)
15:01:13 kafka | [2025-06-16 14:55:57,752] INFO [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition)
15:01:13 kafka | [2025-06-16 14:55:57,752] INFO [Broker id=1] Leader __consumer_offsets-6 with topic id Some(5cgXABVFQiGM78mGoDn3ig) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
15:01:13 kafka | [2025-06-16 14:55:57,759] INFO [LogLoader partition=__consumer_offsets-43, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
15:01:13 kafka | [2025-06-16 14:55:57,759] INFO Created log for partition __consumer_offsets-43 in /var/lib/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
15:01:13 kafka | [2025-06-16 14:55:57,759] INFO [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition)
15:01:13 kafka | [2025-06-16 14:55:57,759] INFO [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition)
15:01:13 kafka | [2025-06-16 14:55:57,759] INFO [Broker id=1] Leader __consumer_offsets-43 with topic id Some(5cgXABVFQiGM78mGoDn3ig) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
15:01:13 kafka | [2025-06-16 14:55:57,766] INFO [LogLoader partition=__consumer_offsets-13, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
15:01:13 kafka | [2025-06-16 14:55:57,766] INFO Created log for partition __consumer_offsets-13 in /var/lib/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
15:01:13 kafka | [2025-06-16 14:55:57,766] INFO [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition)
15:01:13 kafka | [2025-06-16 14:55:57,767] INFO [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition)
15:01:13 kafka | [2025-06-16 14:55:57,767] INFO [Broker id=1] Leader __consumer_offsets-13 with topic id Some(5cgXABVFQiGM78mGoDn3ig) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
15:01:13 kafka | [2025-06-16 14:55:57,773] INFO [LogLoader partition=__consumer_offsets-28, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
15:01:13 kafka | [2025-06-16 14:55:57,773] INFO Created log for partition __consumer_offsets-28 in /var/lib/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
15:01:13 kafka | [2025-06-16 14:55:57,774] INFO [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition)
15:01:13 kafka | [2025-06-16 14:55:57,774] INFO [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition)
15:01:13 kafka | [2025-06-16 14:55:57,774] INFO [Broker id=1] Leader __consumer_offsets-28 with topic id Some(5cgXABVFQiGM78mGoDn3ig) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
15:01:13 kafka | [2025-06-16 14:55:57,781] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger)
15:01:13 kafka | [2025-06-16 14:55:57,781] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger)
15:01:13 kafka | [2025-06-16 14:55:57,781] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger)
15:01:13 kafka | [2025-06-16 14:55:57,781] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger)
15:01:13 kafka | [2025-06-16 14:55:57,781] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger)
15:01:13 kafka | [2025-06-16 14:55:57,781] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-48 (state.change.logger)
15:01:13 kafka | [2025-06-16 14:55:57,781] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger)
15:01:13 kafka | [2025-06-16 14:55:57,781] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-34 (state.change.logger)
15:01:13 kafka | [2025-06-16 14:55:57,781] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-4 (state.change.logger)
15:01:13 kafka | [2025-06-16 14:55:57,781] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger)
15:01:13 kafka | [2025-06-16 14:55:57,781] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-26 (state.change.logger)
15:01:13 kafka | [2025-06-16 14:55:57,781] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-49 (state.change.logger)
15:01:13 kafka | [2025-06-16 14:55:57,781] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-39 (state.change.logger)
15:01:13 kafka | [2025-06-16 14:55:57,781] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-9 (state.change.logger)
15:01:13 kafka | [2025-06-16 14:55:57,781] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-24 (state.change.logger)
15:01:13 kafka | [2025-06-16 14:55:57,781] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-31 (state.change.logger)
15:01:13 kafka | [2025-06-16 14:55:57,781] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-46 (state.change.logger)
15:01:13 kafka | [2025-06-16 14:55:57,781] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-1 (state.change.logger)
15:01:13 kafka | [2025-06-16 14:55:57,781] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-16 (state.change.logger)
15:01:13 kafka | [2025-06-16 14:55:57,781] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger)
15:01:13 kafka | [2025-06-16 14:55:57,781] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-25 (state.change.logger)
15:01:13 kafka | [2025-06-16 14:55:57,781] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-40 (state.change.logger)
15:01:13 kafka | [2025-06-16 14:55:57,781] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-47 (state.change.logger)
15:01:13 kafka | [2025-06-16 14:55:57,781] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger)
15:01:13 kafka | [2025-06-16 14:55:57,781] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-32 (state.change.logger)
15:01:13 kafka | [2025-06-16 14:55:57,781] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-37 (state.change.logger)
15:01:13 kafka | [2025-06-16 14:55:57,781] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-7 (state.change.logger)
15:01:13 kafka | [2025-06-16 14:55:57,781] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-22 (state.change.logger)
15:01:13 kafka | [2025-06-16 14:55:57,781] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger)
15:01:13 kafka | [2025-06-16 14:55:57,781] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-44 (state.change.logger)
15:01:13 kafka | [2025-06-16 14:55:57,781] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-14 (state.change.logger)
15:01:13 kafka | [2025-06-16 14:55:57,781] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger)
15:01:13 kafka | [2025-06-16 14:55:57,781] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger)
15:01:13 kafka | [2025-06-16 14:55:57,781] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-8 (state.change.logger)
15:01:13 kafka | [2025-06-16 14:55:57,781] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition policy-pdp-pap-0 (state.change.logger)
15:01:13 kafka | [2025-06-16 14:55:57,781] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-45 (state.change.logger)
15:01:13 kafka | [2025-06-16 14:55:57,781] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-15 (state.change.logger)
15:01:13 kafka | [2025-06-16 14:55:57,781] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-30 (state.change.logger)
15:01:13 kafka | [2025-06-16 14:55:57,781] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-0 (state.change.logger)
15:01:13 kafka | [2025-06-16 14:55:57,781] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-35 (state.change.logger)
15:01:13 kafka | [2025-06-16 14:55:57,781] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-5 (state.change.logger)
15:01:13 kafka | [2025-06-16 14:55:57,781] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-20 (state.change.logger)
15:01:13 kafka | [2025-06-16 14:55:57,781] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-27 (state.change.logger)
15:01:13 kafka | [2025-06-16 14:55:57,781] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-42 (state.change.logger)
15:01:13 kafka | [2025-06-16 14:55:57,781] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-12 (state.change.logger)
15:01:13 kafka | [2025-06-16 14:55:57,781] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-21 (state.change.logger)
15:01:13 kafka | [2025-06-16 14:55:57,781] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-36 (state.change.logger)
15:01:13 kafka | [2025-06-16 14:55:57,781] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-6 (state.change.logger)
15:01:13 kafka | [2025-06-16 14:55:57,781] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-43 (state.change.logger)
15:01:13 kafka | [2025-06-16 14:55:57,781] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-13 (state.change.logger)
15:01:13 kafka | [2025-06-16 14:55:57,781] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-28 (state.change.logger)
15:01:13 kafka | [2025-06-16 14:55:57,787] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
15:01:13 kafka | [2025-06-16 14:55:57,790] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
15:01:13 kafka | [2025-06-16 14:55:57,793] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
15:01:13 kafka | [2025-06-16 14:55:57,793] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
15:01:13 kafka | [2025-06-16 14:55:57,793] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
15:01:13 kafka | [2025-06-16 14:55:57,793] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
15:01:13 kafka | [2025-06-16 14:55:57,793] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
15:01:13 kafka | [2025-06-16 14:55:57,793] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
15:01:13 kafka | [2025-06-16 14:55:57,793] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
15:01:13 kafka | [2025-06-16 14:55:57,793] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
15:01:13 kafka | [2025-06-16 14:55:57,793] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
15:01:13 kafka | [2025-06-16 14:55:57,793] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
15:01:13 kafka | [2025-06-16 14:55:57,793] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
15:01:13 kafka | [2025-06-16 14:55:57,793] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
15:01:13 kafka | [2025-06-16 14:55:57,793] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
15:01:13 kafka | [2025-06-16 14:55:57,793] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
15:01:13 kafka | [2025-06-16 14:55:57,793] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
15:01:13 kafka | [2025-06-16 14:55:57,793] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
15:01:13 kafka | [2025-06-16 14:55:57,793] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
15:01:13 kafka | [2025-06-16 14:55:57,793] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
15:01:13 kafka | [2025-06-16 14:55:57,793] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
15:01:13 kafka | [2025-06-16 14:55:57,793] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
15:01:13 kafka | [2025-06-16 14:55:57,794] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
15:01:13 kafka | [2025-06-16 14:55:57,794] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
15:01:13 kafka | [2025-06-16 14:55:57,794] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
15:01:13 kafka | [2025-06-16 14:55:57,794] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
15:01:13 kafka | [2025-06-16 14:55:57,794] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
15:01:13 kafka | [2025-06-16 14:55:57,794] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
15:01:13 kafka | [2025-06-16 14:55:57,794] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
15:01:13 kafka | [2025-06-16 14:55:57,794] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
15:01:13 kafka | [2025-06-16 14:55:57,794] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
15:01:13 kafka | [2025-06-16 14:55:57,794] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
15:01:13 kafka | [2025-06-16 14:55:57,794] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
15:01:13 kafka | [2025-06-16 14:55:57,794] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
15:01:13 kafka | [2025-06-16 14:55:57,794] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
15:01:13 kafka | [2025-06-16 14:55:57,794] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
15:01:13 kafka | [2025-06-16 14:55:57,794] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
15:01:13 kafka | [2025-06-16 14:55:57,794] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
15:01:13 kafka | [2025-06-16 14:55:57,794] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
15:01:13 kafka | [2025-06-16 14:55:57,794] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
15:01:13 kafka | [2025-06-16 14:55:57,794] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
15:01:13 kafka | [2025-06-16 14:55:57,795] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
15:01:13 kafka | [2025-06-16 14:55:57,795] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
15:01:13 kafka | [2025-06-16 14:55:57,795] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
15:01:13 kafka | [2025-06-16 14:55:57,795] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
15:01:13 kafka | [2025-06-16 14:55:57,795] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
15:01:13 kafka | [2025-06-16 14:55:57,795] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
15:01:13 kafka | [2025-06-16 14:55:57,795] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
15:01:13 kafka | [2025-06-16 14:55:57,795] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
15:01:13 kafka | [2025-06-16 14:55:57,795] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
15:01:13 kafka | [2025-06-16 14:55:57,795] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
15:01:13 kafka | [2025-06-16 14:55:57,795] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
15:01:13 kafka | [2025-06-16 14:55:57,795] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
15:01:13 kafka | [2025-06-16 14:55:57,795] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
15:01:13 kafka | [2025-06-16 14:55:57,795] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 22 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
15:01:13 kafka | [2025-06-16 14:55:57,795] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
15:01:13 kafka | [2025-06-16 14:55:57,795] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
15:01:13 kafka | [2025-06-16 14:55:57,795] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
15:01:13 kafka | [2025-06-16 14:55:57,795] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
15:01:13 kafka | [2025-06-16 14:55:57,795] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
15:01:13 kafka | [2025-06-16 14:55:57,795] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
15:01:13 kafka | [2025-06-16 14:55:57,795] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
15:01:13 kafka | [2025-06-16 14:55:57,795] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
15:01:13 kafka | [2025-06-16 14:55:57,795] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
15:01:13 kafka | [2025-06-16 14:55:57,796] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
15:01:13 kafka | [2025-06-16 14:55:57,796] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
15:01:13 kafka | [2025-06-16 14:55:57,796] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
15:01:13 kafka | [2025-06-16 14:55:57,796] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
15:01:13 kafka | [2025-06-16 14:55:57,796] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
15:01:13 kafka | [2025-06-16 14:55:57,796] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
15:01:13 kafka | [2025-06-16 14:55:57,796] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
15:01:13 kafka | [2025-06-16 14:55:57,796] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
15:01:13 kafka | [2025-06-16 14:55:57,797] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
15:01:13 kafka | [2025-06-16 14:55:57,797] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
15:01:13 kafka | [2025-06-16 14:55:57,797] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
15:01:13 kafka | [2025-06-16 14:55:57,797] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
15:01:13 kafka | [2025-06-16 14:55:57,797] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
15:01:13 kafka | [2025-06-16 14:55:57,798] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
15:01:13 kafka | [2025-06-16 14:55:57,798] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
15:01:13 kafka | [2025-06-16 14:55:57,798] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
15:01:13 kafka | [2025-06-16 14:55:57,798] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
15:01:13 kafka | [2025-06-16 14:55:57,798] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
15:01:13 kafka | [2025-06-16 14:55:57,798] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
15:01:13 kafka | [2025-06-16 14:55:57,798] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
15:01:13 kafka | [2025-06-16 14:55:57,799] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
15:01:13 kafka | [2025-06-16 14:55:57,799] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
15:01:13 kafka | [2025-06-16 14:55:57,799] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
15:01:13 kafka | [2025-06-16 14:55:57,799] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
15:01:13 kafka | [2025-06-16 14:55:57,799] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
15:01:13 kafka | [2025-06-16 14:55:57,799] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
15:01:13 kafka | [2025-06-16 14:55:57,799] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
15:01:13 kafka | [2025-06-16 14:55:57,799] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
15:01:13 kafka | [2025-06-16 14:55:57,799] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
15:01:13 kafka | [2025-06-16 14:55:57,799] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
15:01:13 kafka | [2025-06-16 14:55:57,799] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
15:01:13 kafka | [2025-06-16 14:55:57,799] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
15:01:13 kafka | [2025-06-16 14:55:57,799] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
15:01:13 kafka | [2025-06-16 14:55:57,800] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0
(kafka.coordinator.group.GroupMetadataManager) 15:01:13 kafka | [2025-06-16 14:55:57,800] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 15:01:13 kafka | [2025-06-16 14:55:57,800] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 15:01:13 kafka | [2025-06-16 14:55:57,802] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 10 milliseconds for epoch 0, of which 4 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 15:01:13 kafka | [2025-06-16 14:55:57,802] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 15:01:13 kafka | [2025-06-16 14:55:57,803] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 15:01:13 kafka | [2025-06-16 14:55:57,803] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 15:01:13 kafka | [2025-06-16 14:55:57,803] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 15:01:13 kafka | [2025-06-16 14:55:57,803] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 15:01:13 kafka | [2025-06-16 14:55:57,803] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 15:01:13 kafka | [2025-06-16 14:55:57,803] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 15:01:13 kafka | [2025-06-16 14:55:57,803] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 15:01:13 kafka | [2025-06-16 14:55:57,803] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 15:01:13 kafka | [2025-06-16 14:55:57,804] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 15:01:13 kafka | [2025-06-16 14:55:57,804] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 15:01:13 kafka | [2025-06-16 14:55:57,804] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 15:01:13 kafka | [2025-06-16 14:55:57,804] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 15:01:13 kafka | [2025-06-16 14:55:57,804] INFO [Broker id=1] Finished LeaderAndIsr request in 520ms correlationId 1 from controller 1 for 51 partitions (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,804] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 15:01:13 kafka | [2025-06-16 14:55:57,804] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 15:01:13 kafka | [2025-06-16 14:55:57,804] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 15:01:13 kafka | [2025-06-16 14:55:57,805] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 11 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 15:01:13 kafka | [2025-06-16 14:55:57,805] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 15:01:13 kafka | [2025-06-16 14:55:57,805] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 15:01:13 kafka | [2025-06-16 14:55:57,805] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 15:01:13 kafka | [2025-06-16 14:55:57,805] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 15:01:13 kafka | [2025-06-16 14:55:57,805] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 15:01:13 kafka | [2025-06-16 14:55:57,805] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 15:01:13 kafka | [2025-06-16 14:55:57,805] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 15:01:13 kafka | [2025-06-16 14:55:57,806] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 11 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 15:01:13 kafka | [2025-06-16 14:55:57,806] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 15:01:13 kafka | [2025-06-16 14:55:57,806] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 15:01:13 kafka | [2025-06-16 14:55:57,806] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 15:01:13 kafka | [2025-06-16 14:55:57,806] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 15:01:13 kafka | [2025-06-16 14:55:57,806] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 15:01:13 kafka | [2025-06-16 14:55:57,806] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 15:01:13 kafka | [2025-06-16 14:55:57,807] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 11 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 15:01:13 kafka | [2025-06-16 14:55:57,807] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 15:01:13 kafka | [2025-06-16 14:55:57,807] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 15:01:13 kafka | [2025-06-16 14:55:57,807] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 15:01:13 kafka | [2025-06-16 14:55:57,807] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 15:01:13 kafka | [2025-06-16 14:55:57,807] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 15:01:13 kafka | [2025-06-16 14:55:57,807] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 15:01:13 kafka | [2025-06-16 14:55:57,807] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 15:01:13 kafka | [2025-06-16 14:55:57,808] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 10 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 15:01:13 kafka | [2025-06-16 14:55:57,808] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 15:01:13 kafka | [2025-06-16 14:55:57,808] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 15:01:13 kafka | [2025-06-16 14:55:57,808] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 15:01:13 kafka | [2025-06-16 14:55:57,808] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 15:01:13 kafka | [2025-06-16 14:55:57,808] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 15:01:13 kafka | [2025-06-16 14:55:57,808] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 15:01:13 kafka | [2025-06-16 14:55:57,808] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 15:01:13 kafka | [2025-06-16 14:55:57,809] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 9 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 15:01:13 kafka | [2025-06-16 14:55:57,809] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 15:01:13 kafka | [2025-06-16 14:55:57,811] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=5cgXABVFQiGM78mGoDn3ig, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), 
LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)]), LeaderAndIsrTopicError(topicId=0tGhbWOsSE6IPwILN0dlNw, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,818] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-pdp-pap-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,818] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-13 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,818] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-46 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,818] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-9 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,818] TRACE [Broker id=1] Cached leader info 
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-42 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,818] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-21 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,819] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-17 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,819] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-30 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,819] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-26 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,819] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-5 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,819] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-38 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,819] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-1 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,819] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-34 in response to UpdateMetadata request sent by controller 1 epoch 1 
with correlation id 2 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,819] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-16 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,819] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-45 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,819] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-12 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,819] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-41 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,819] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-24 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,819] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-20 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,820] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-49 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,820] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,820] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, 
replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-29 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,820] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-25 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,820] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-8 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,820] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-37 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,820] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-4 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,820] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-33 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,820] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-15 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,820] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-48 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,820] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-11 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,820] TRACE [Broker id=1] Cached leader info 
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-44 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,820] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-23 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,821] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-19 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,821] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-32 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,821] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-28 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,821] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-7 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,821] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-40 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,821] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-3 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,821] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-36 in response to UpdateMetadata request sent by controller 1 epoch 1 
with correlation id 2 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,821] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-47 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,821] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-14 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,821] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-43 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,821] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-10 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,821] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-22 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,821] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-18 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,821] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-31 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,821] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-27 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,822] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, 
replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-39 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,822] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-6 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,822] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-35 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,822] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-2 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,823] INFO [Broker id=1] Add 51 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,823] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 15:01:13 kafka | [2025-06-16 14:55:57,914] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group policy-pap in Empty state. Created a new member id consumer-policy-pap-4-b7433ea9-2bd8-40f9-a950-a1ec305b1c31 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) 15:01:13 kafka | [2025-06-16 14:55:57,930] INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-policy-pap-4-b7433ea9-2bd8-40f9-a950-a1ec305b1c31 with group instance id None; client reason: need to re-join with the given member-id: consumer-policy-pap-4-b7433ea9-2bd8-40f9-a950-a1ec305b1c31) (kafka.coordinator.group.GroupCoordinator) 15:01:13 kafka | [2025-06-16 14:55:57,942] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 477ccfe3-c295-43ff-8034-7aaaa0b17546 in Empty state. Created a new member id consumer-477ccfe3-c295-43ff-8034-7aaaa0b17546-3-6d299dab-b299-4571-9596-78f7691c5dee and request the member to rejoin with this id. 
(kafka.coordinator.group.GroupCoordinator) 15:01:13 kafka | [2025-06-16 14:55:57,946] INFO [GroupCoordinator 1]: Preparing to rebalance group 477ccfe3-c295-43ff-8034-7aaaa0b17546 in state PreparingRebalance with old generation 0 (__consumer_offsets-41) (reason: Adding new member consumer-477ccfe3-c295-43ff-8034-7aaaa0b17546-3-6d299dab-b299-4571-9596-78f7691c5dee with group instance id None; client reason: need to re-join with the given member-id: consumer-477ccfe3-c295-43ff-8034-7aaaa0b17546-3-6d299dab-b299-4571-9596-78f7691c5dee) (kafka.coordinator.group.GroupCoordinator) 15:01:13 kafka | [2025-06-16 14:56:00,947] INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator) 15:01:13 kafka | [2025-06-16 14:56:00,957] INFO [GroupCoordinator 1]: Stabilized group 477ccfe3-c295-43ff-8034-7aaaa0b17546 generation 1 (__consumer_offsets-41) with 1 members (kafka.coordinator.group.GroupCoordinator) 15:01:13 kafka | [2025-06-16 14:56:00,973] INFO [GroupCoordinator 1]: Assignment received from leader consumer-477ccfe3-c295-43ff-8034-7aaaa0b17546-3-6d299dab-b299-4571-9596-78f7691c5dee for group 477ccfe3-c295-43ff-8034-7aaaa0b17546 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) 15:01:13 kafka | [2025-06-16 14:56:00,987] INFO [GroupCoordinator 1]: Assignment received from leader consumer-policy-pap-4-b7433ea9-2bd8-40f9-a950-a1ec305b1c31 for group policy-pap for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) 15:01:13 kafka | [2025-06-16 14:56:41,751] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group opa-pdp in Empty state. Created a new member id rdkafka-26cfa06a-a89a-4cfa-af93-f5b01e9430e2 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) 15:01:13 kafka | [2025-06-16 14:56:41,753] INFO [GroupCoordinator 1]: Preparing to rebalance group opa-pdp in state PreparingRebalance with old generation 0 (__consumer_offsets-25) (reason: Adding new member rdkafka-26cfa06a-a89a-4cfa-af93-f5b01e9430e2 with group instance id None; client reason: not provided) (kafka.coordinator.group.GroupCoordinator) 15:01:13 kafka | [2025-06-16 14:56:44,756] INFO [GroupCoordinator 1]: Stabilized group opa-pdp generation 1 (__consumer_offsets-25) with 1 members (kafka.coordinator.group.GroupCoordinator) 15:01:13 kafka | [2025-06-16 14:56:44,760] INFO [GroupCoordinator 1]: Assignment received from leader rdkafka-26cfa06a-a89a-4cfa-af93-f5b01e9430e2 for group opa-pdp for generation 1. The group has 1 members, 0 of which are static. 
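(kafka.coordinator.group.GroupCoordinator)

The coordinator traffic above — "Dynamic member with unknown member id joins group ... in Empty state", "Preparing to rebalance", "Stabilized group ... generation 1", "Assignment received from leader" — is the broker-side view of an ordinary consumer-group join: the first JoinGroup is answered with a broker-generated member id and a request to rejoin, and the rejoin with that id drives the rebalance. A minimal Java consumer that would produce the same sequence is sketched below; kafka:9092, the policy-pap group, and the policy-pdp-pap topic are taken from this log, while the class name, deserializers, and poll timeout are illustrative assumptions, not the actual CSIT client configuration.

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class GroupJoinSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "kafka:9092"); // broker (id: 1) seen in this log
            props.put("group.id", "policy-pap");          // consumer group seen in this log
            props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                // subscribe() + poll() performs the JoinGroup/SyncGroup round trips that the
                // GroupCoordinator logs as "joins group", "Preparing to rebalance",
                // "Stabilized group" and "Assignment received from leader".
                consumer.subscribe(List.of("policy-pdp-pap"));
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
                System.out.printf("assignment=%s records=%d%n", consumer.assignment(), records.count());
            } // close() sends an explicit LeaveGroup, like the rdkafka testgrp members further down
        }
    }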
15:01:13 kafka | [2025-06-16 14:57:52,526] INFO Creating topic policy-notification with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) 15:01:13 kafka | [2025-06-16 14:57:52,541] INFO [Controller id=1] New topics: [Set(policy-notification)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-notification,Some(aleKDLTuTZ6btGW4kqtDMw),Map(policy-notification-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) 15:01:13 kafka | [2025-06-16 14:57:52,541] INFO [Controller id=1] New partition creation callback for policy-notification-0 (kafka.controller.KafkaController) 15:01:13 kafka | [2025-06-16 14:57:52,541] INFO [Controller id=1 epoch=1] Changed partition policy-notification-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:57:52,542] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 15:01:13 kafka | [2025-06-16 14:57:52,542] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-notification-0 from NonExistentReplica to NewReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:57:52,542] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 15:01:13 kafka | [2025-06-16 14:57:52,549] INFO [Controller id=1 epoch=1] Changed partition policy-notification-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 15:01:13 kafka | [2025-06-16 14:57:52,549] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-notification', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-notification-0 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:57:52,549] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 1 become-leader and 0 become-follower partitions (state.change.logger) 15:01:13 kafka | [2025-06-16 14:57:52,550] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 1 partitions (state.change.logger) 15:01:13 kafka | [2025-06-16 14:57:52,550] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-notification-0 from NewReplica to OnlineReplica (state.change.logger) 15:01:13 kafka | [2025-06-16 14:57:52,550] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 15:01:13 kafka | [2025-06-16 14:57:52,552] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 for 1 partitions (state.change.logger) 15:01:13 kafka | [2025-06-16 14:57:52,552] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-notification', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:57:52,553] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition policy-notification-0 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:57:52,553] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions Set(policy-notification-0) (kafka.server.ReplicaFetcherManager) 15:01:13 kafka | [2025-06-16 14:57:52,553] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 3 from controller 1 epoch 1 as part of the become-leader transition for 1 partitions (state.change.logger) 15:01:13 kafka | [2025-06-16 14:57:52,559] INFO [LogLoader partition=policy-notification-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 15:01:13 kafka | [2025-06-16 14:57:52,560] INFO Created log for partition policy-notification-0 in /var/lib/kafka/data/policy-notification-0 with properties {} (kafka.log.LogManager) 15:01:13 kafka | [2025-06-16 14:57:52,561] INFO [Partition policy-notification-0 broker=1] No checkpointed highwatermark is found for partition policy-notification-0 (kafka.cluster.Partition) 15:01:13 kafka | [2025-06-16 14:57:52,563] INFO [Partition policy-notification-0 broker=1] Log loaded for partition policy-notification-0 with initial high watermark 0 (kafka.cluster.Partition) 15:01:13 kafka | [2025-06-16 14:57:52,563] INFO [Broker id=1] Leader policy-notification-0 with topic id Some(aleKDLTuTZ6btGW4kqtDMw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 15:01:13 kafka | [2025-06-16 14:57:52,566] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition policy-notification-0 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:57:52,567] INFO [Broker id=1] Finished LeaderAndIsr request in 16ms correlationId 3 from controller 1 for 1 partitions (state.change.logger) 15:01:13 kafka | [2025-06-16 14:57:52,568] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=aleKDLTuTZ6btGW4kqtDMw, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 3 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 15:01:13 kafka | [2025-06-16 14:57:52,570] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-notification', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-notification-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:57:52,570] INFO [Broker id=1] Add 1 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 15:01:13 kafka | [2025-06-16 14:57:52,571] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 4 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 15:01:13 kafka | [2025-06-16 14:59:15,483] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group testgrp in Empty state. 
Created a new member id rdkafka-a4364eeb-b856-4339-871f-46855fa148e3 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) 15:01:13 kafka | [2025-06-16 14:59:15,485] INFO [GroupCoordinator 1]: Preparing to rebalance group testgrp in state PreparingRebalance with old generation 0 (__consumer_offsets-3) (reason: Adding new member rdkafka-a4364eeb-b856-4339-871f-46855fa148e3 with group instance id None; client reason: not provided) (kafka.coordinator.group.GroupCoordinator) 15:01:13 kafka | [2025-06-16 14:59:18,487] INFO [GroupCoordinator 1]: Stabilized group testgrp generation 1 (__consumer_offsets-3) with 1 members (kafka.coordinator.group.GroupCoordinator) 15:01:13 kafka | [2025-06-16 14:59:18,491] INFO [GroupCoordinator 1]: Assignment received from leader rdkafka-a4364eeb-b856-4339-871f-46855fa148e3 for group testgrp for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) 15:01:13 kafka | [2025-06-16 14:59:18,610] INFO [GroupCoordinator 1]: Preparing to rebalance group testgrp in state PreparingRebalance with old generation 1 (__consumer_offsets-3) (reason: Removing member rdkafka-a4364eeb-b856-4339-871f-46855fa148e3 on LeaveGroup; client reason: not provided) (kafka.coordinator.group.GroupCoordinator) 15:01:13 kafka | [2025-06-16 14:59:18,611] INFO [GroupCoordinator 1]: Group testgrp with generation 2 is now empty (__consumer_offsets-3) (kafka.coordinator.group.GroupCoordinator) 15:01:13 kafka | [2025-06-16 14:59:18,614] INFO [GroupCoordinator 1]: Member MemberMetadata(memberId=rdkafka-a4364eeb-b856-4339-871f-46855fa148e3, groupInstanceId=None, clientId=rdkafka, clientHost=/172.17.0.7, sessionTimeoutMs=45000, rebalanceTimeoutMs=300000, supportedProtocols=List(range, roundrobin)) has left group testgrp through explicit `LeaveGroup`; client reason: not provided (kafka.coordinator.group.GroupCoordinator) 15:01:13 kafka | [2025-06-16 14:59:41,321] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group testgrp in Empty state. Created a new member id rdkafka-409dca20-772a-4448-a8cd-3f7a0ef8d7d2 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) 15:01:13 kafka | [2025-06-16 14:59:41,322] INFO [GroupCoordinator 1]: Preparing to rebalance group testgrp in state PreparingRebalance with old generation 2 (__consumer_offsets-3) (reason: Adding new member rdkafka-409dca20-772a-4448-a8cd-3f7a0ef8d7d2 with group instance id None; client reason: not provided) (kafka.coordinator.group.GroupCoordinator) 15:01:13 kafka | [2025-06-16 14:59:44,324] INFO [GroupCoordinator 1]: Stabilized group testgrp generation 3 (__consumer_offsets-3) with 1 members (kafka.coordinator.group.GroupCoordinator) 15:01:13 kafka | [2025-06-16 14:59:44,327] INFO [GroupCoordinator 1]: Assignment received from leader rdkafka-409dca20-772a-4448-a8cd-3f7a0ef8d7d2 for group testgrp for generation 3. The group has 1 members, 0 of which are static. 
(kafka.coordinator.group.GroupCoordinator) 15:01:13 kafka | [2025-06-16 14:59:44,336] INFO [GroupCoordinator 1]: Preparing to rebalance group testgrp in state PreparingRebalance with old generation 3 (__consumer_offsets-3) (reason: Removing member rdkafka-409dca20-772a-4448-a8cd-3f7a0ef8d7d2 on LeaveGroup; client reason: not provided) (kafka.coordinator.group.GroupCoordinator) 15:01:13 kafka | [2025-06-16 14:59:44,336] INFO [GroupCoordinator 1]: Group testgrp with generation 4 is now empty (__consumer_offsets-3) (kafka.coordinator.group.GroupCoordinator) 15:01:13 kafka | [2025-06-16 14:59:44,337] INFO [GroupCoordinator 1]: Member MemberMetadata(memberId=rdkafka-409dca20-772a-4448-a8cd-3f7a0ef8d7d2, groupInstanceId=None, clientId=rdkafka, clientHost=/172.17.0.7, sessionTimeoutMs=45000, rebalanceTimeoutMs=300000, supportedProtocols=List(range, roundrobin)) has left group testgrp through explicit `LeaveGroup`; client reason: not provided (kafka.coordinator.group.GroupCoordinator) 15:01:13 kafka | [2025-06-16 15:00:07,005] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group testgrp in Empty state. Created a new member id rdkafka-bb3bf249-6f83-404f-b5fa-324115ad2d79 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) 15:01:13 kafka | [2025-06-16 15:00:07,006] INFO [GroupCoordinator 1]: Preparing to rebalance group testgrp in state PreparingRebalance with old generation 4 (__consumer_offsets-3) (reason: Adding new member rdkafka-bb3bf249-6f83-404f-b5fa-324115ad2d79 with group instance id None; client reason: not provided) (kafka.coordinator.group.GroupCoordinator) 15:01:13 kafka | [2025-06-16 15:00:10,009] INFO [GroupCoordinator 1]: Stabilized group testgrp generation 5 (__consumer_offsets-3) with 1 members (kafka.coordinator.group.GroupCoordinator) 15:01:13 kafka | [2025-06-16 15:00:10,012] INFO [GroupCoordinator 1]: Assignment received from leader rdkafka-bb3bf249-6f83-404f-b5fa-324115ad2d79 for group testgrp for generation 5. The group has 1 members, 0 of which are static. 
(kafka.coordinator.group.GroupCoordinator)
15:01:13 kafka | [2025-06-16 15:00:10,022] INFO [GroupCoordinator 1]: Preparing to rebalance group testgrp in state PreparingRebalance with old generation 5 (__consumer_offsets-3) (reason: Removing member rdkafka-bb3bf249-6f83-404f-b5fa-324115ad2d79 on LeaveGroup; client reason: not provided) (kafka.coordinator.group.GroupCoordinator)
15:01:13 kafka | [2025-06-16 15:00:10,022] INFO [GroupCoordinator 1]: Group testgrp with generation 6 is now empty (__consumer_offsets-3) (kafka.coordinator.group.GroupCoordinator)
15:01:13 kafka | [2025-06-16 15:00:10,023] INFO [GroupCoordinator 1]: Member MemberMetadata(memberId=rdkafka-bb3bf249-6f83-404f-b5fa-324115ad2d79, groupInstanceId=None, clientId=rdkafka, clientHost=/172.17.0.7, sessionTimeoutMs=45000, rebalanceTimeoutMs=300000, supportedProtocols=List(range, roundrobin)) has left group testgrp through explicit `LeaveGroup`; client reason: not provided (kafka.coordinator.group.GroupCoordinator)
15:01:13 kafka | [2025-06-16 15:00:14,314] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController)
15:01:13 kafka | [2025-06-16 15:00:14,314] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController)
15:01:13 kafka | [2025-06-16 15:00:14,320] DEBUG [Controller id=1] Topics not in preferred replica for broker 1 HashMap() (kafka.controller.KafkaController)
15:01:13 kafka | [2025-06-16 15:00:14,321] TRACE [Controller id=1] Leader imbalance ratio for broker 1 is 0.0 (kafka.controller.KafkaController)
15:01:14 policy-api | Waiting for policy-db-migrator port 6824...
15:01:14 policy-api | policy-db-migrator (172.17.0.7:6824) open
15:01:14 policy-api | Policy api config file: /opt/app/policy/api/etc/apiParameters.yaml
15:01:14 policy-api |
15:01:14 policy-api | (Spring Boot ASCII-art startup banner)
15:01:14 policy-api | :: Spring Boot :: (v3.4.6)
15:01:14 policy-api |
15:01:14 policy-api | [2025-06-16T14:55:33.840+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.2.Final
15:01:14 policy-api | [2025-06-16T14:55:33.928+00:00|INFO|PolicyApiApplication|main] Starting PolicyApiApplication using Java 17.0.15 with PID 61 (/app/api.jar started by policy in /opt/app/policy/api/bin)
15:01:14 policy-api | [2025-06-16T14:55:33.929+00:00|INFO|PolicyApiApplication|main] The following 1 profile is active: "default"
15:01:14 policy-api | [2025-06-16T14:55:35.605+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode.
15:01:14 policy-api | [2025-06-16T14:55:35.822+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 202 ms. Found 6 JPA repository interfaces.
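The testgrp join/stabilize/leave cycles recorded by the GroupCoordinator above come from a short-lived librdkafka client (clientId=rdkafka). A minimal sketch of a client that produces this sequence, using confluent-kafka (a librdkafka wrapper whose member ids take the rdkafka-* form seen here); the topic name is a placeholder, since the log never names the topic the test group polls:

    from confluent_kafka import Consumer  # pip install confluent-kafka

    # Timeouts mirror the MemberMetadata values logged above; the topic
    # name is a hypothetical stand-in.
    consumer = Consumer({
        "bootstrap.servers": "kafka:9092",
        "group.id": "testgrp",
        "session.timeout.ms": 45000,      # sessionTimeoutMs=45000 in the log
        "max.poll.interval.ms": 300000,   # rebalanceTimeoutMs=300000 in the log
        "auto.offset.reset": "earliest",
    })

    consumer.subscribe(["some-test-topic"])  # JoinGroup -> "Preparing to rebalance"
    try:
        msg = consumer.poll(timeout=5.0)     # group stabilizes, assignment received
        if msg is not None and msg.error() is None:
            print(msg.value())
    finally:
        consumer.close()                     # explicit LeaveGroup -> "group ... is now empty"

Each subscribe/close pair accounts for one join-and-leave cycle in the log; the three cycles above walk the group through generations 1 to 6.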
15:01:14 policy-api | [2025-06-16T14:55:36.594+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port 6969 (http)
15:01:14 policy-api | [2025-06-16T14:55:36.610+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"]
15:01:14 policy-api | [2025-06-16T14:55:36.612+00:00|INFO|StandardService|main] Starting service [Tomcat]
15:01:14 policy-api | [2025-06-16T14:55:36.612+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.41]
15:01:14 policy-api | [2025-06-16T14:55:36.659+00:00|INFO|[/policy/api/v1]|main] Initializing Spring embedded WebApplicationContext
15:01:14 policy-api | [2025-06-16T14:55:36.659+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 2652 ms
15:01:14 policy-api | [2025-06-16T14:55:37.020+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default]
15:01:14 policy-api | [2025-06-16T14:55:37.111+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.6.16.Final
15:01:14 policy-api | [2025-06-16T14:55:37.165+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled
15:01:14 policy-api | [2025-06-16T14:55:37.602+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer
15:01:14 policy-api | [2025-06-16T14:55:37.648+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting...
15:01:14 policy-api | [2025-06-16T14:55:37.885+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.postgresql.jdbc.PgConnection@59aa1d1c
15:01:14 policy-api | [2025-06-16T14:55:37.890+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed.
15:01:14 policy-api | [2025-06-16T14:55:38.006+00:00|INFO|pooling|main] HHH10001005: Database info:
15:01:14 policy-api | Database JDBC URL [Connecting through datasource 'HikariDataSource (HikariPool-1)']
15:01:14 policy-api | Database driver: undefined/unknown
15:01:14 policy-api | Database version: 16.4
15:01:14 policy-api | Autocommit mode: undefined/unknown
15:01:14 policy-api | Isolation level: undefined/unknown
15:01:14 policy-api | Minimum pool size: undefined/unknown
15:01:14 policy-api | Maximum pool size: undefined/unknown
15:01:14 policy-api | [2025-06-16T14:55:40.230+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration)
15:01:14 policy-api | [2025-06-16T14:55:40.238+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default'
15:01:14 policy-api | [2025-06-16T14:55:40.984+00:00|WARN|ApiDatabaseInitializer|main] Detected multi-versioned type: policytypes/onap.policies.monitoring.tcagen2.v2.yaml
15:01:14 policy-api | [2025-06-16T14:55:41.947+00:00|INFO|ApiDatabaseInitializer|main] Multi-versioned Service Template [onap.policies.Monitoring, onap.policies.monitoring.tcagen2]
15:01:14 policy-api | [2025-06-16T14:55:43.102+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning
15:01:14 policy-api | [2025-06-16T14:55:43.148+00:00|INFO|InitializeUserDetailsBeanManagerConfigurer$InitializeUserDetailsManagerConfigurer|main] Global AuthenticationManager configured with UserDetailsService bean with name inMemoryUserDetailsManager
15:01:14 policy-api | [2025-06-16T14:55:43.881+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoints beneath base path ''
15:01:14 policy-api | [2025-06-16T14:55:44.026+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"]
15:01:14 policy-api | [2025-06-16T14:55:44.046+00:00|INFO|TomcatWebServer|main] Tomcat started on port 6969 (http) with context path '/policy/api/v1'
15:01:14 policy-api | [2025-06-16T14:55:44.069+00:00|INFO|PolicyApiApplication|main] Started PolicyApiApplication in 10.961 seconds (process running for 11.594)
15:01:14 policy-api | [2025-06-16T14:56:39.927+00:00|INFO|[/policy/api/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet'
15:01:14 policy-api | [2025-06-16T14:56:39.927+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet'
15:01:14 policy-api | [2025-06-16T14:56:39.929+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 2 ms
15:01:14 policy-api | [2025-06-16T14:58:53.376+00:00|INFO|OrderedServiceImpl|http-nio-6969-exec-5] ***** OrderedServiceImpl implementers:
15:01:14 policy-api | []
15:01:14 policy-api | [2025-06-16T15:00:10.387+00:00|WARN|CommonRestController|http-nio-6969-exec-8] "incoming fragment" INVALID, item has status INVALID
15:01:14 policy-api | item "entity" value "abac:1.0.7" INVALID, does not equal existing entity
15:01:14 policy-api |
15:01:14 policy-csit | Invoking the robot tests from: opa-pdp-test.robot opa-pdp-slas.robot
15:01:14 policy-csit | Run Robot test
15:01:14 policy-csit | ROBOT_VARIABLES=-v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies
15:01:14 policy-csit | -v NODETEMPLATES:/opt/robotworkspace/models/models-examples/src/main/resources/nodetemplates
15:01:14 policy-csit | -v POLICY_API_IP:policy-api:6969
15:01:14 policy-csit | -v POLICY_RUNTIME_ACM_IP:policy-clamp-runtime-acm:6969
15:01:14 policy-csit | -v POLICY_PARTICIPANT_SIM_IP:policy-clamp-ac-sim-ppnt:6969
15:01:14 policy-csit | -v POLICY_PAP_IP:policy-pap:6969
15:01:14 policy-csit | -v APEX_IP:policy-apex-pdp:6969
15:01:14 policy-csit | -v APEX_EVENTS_IP:policy-apex-pdp:23324
15:01:14 policy-csit | -v KAFKA_IP:kafka:9092
15:01:14 policy-csit | -v PROMETHEUS_IP:prometheus:9090
15:01:14 policy-csit | -v POLICY_PDPX_IP:policy-xacml-pdp:6969
15:01:14 policy-csit | -v POLICY_OPA_IP:policy-opa-pdp:8282
15:01:14 policy-csit | -v POLICY_DROOLS_IP:policy-drools-pdp:9696
15:01:14 policy-csit | -v DROOLS_IP:policy-drools-apps:6969
15:01:14 policy-csit | -v DROOLS_IP_2:policy-drools-apps:9696
15:01:14 policy-csit | -v TEMP_FOLDER:/tmp/distribution
15:01:14 policy-csit | -v DISTRIBUTION_IP:policy-distribution:6969
15:01:14 policy-csit | -v TEST_ENV:docker
15:01:14 policy-csit | -v JAEGER_IP:jaeger:16686
15:01:14 policy-csit | Starting Robot test suites ...
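The ROBOT_VARIABLES listed above are handed to Robot Framework as -v name:value overrides. A sketch of the equivalent invocation through Robot's Python API (illustrative, not the harness's actual wrapper script; only a subset of the variables is repeated):

    from robot import run  # Robot Framework's programmatic entry point

    # Equivalent in spirit to:
    #   robot -v NAME:value ... opa-pdp-test.robot opa-pdp-slas.robot
    rc = run(
        "opa-pdp-test.robot",
        "opa-pdp-slas.robot",
        variable=[
            "DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies",
            "POLICY_API_IP:policy-api:6969",
            "POLICY_OPA_IP:policy-opa-pdp:8282",
            "KAFKA_IP:kafka:9092",
            "PROMETHEUS_IP:prometheus:9090",
            "TEST_ENV:docker",
        ],
        outputdir="/tmp/results",  # matches the Output/Log/Report paths reported below
    )
    print("RESULT:", rc)           # run() returns the number of failed tests

With every test passing, the return code is 0, which is the "RESULT: 0" printed at the end of the suite output below.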
15:01:14 policy-csit | ==============================================================================
15:01:14 policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas
15:01:14 policy-csit | ==============================================================================
15:01:14 policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas.Opa-Pdp-Test
15:01:14 policy-csit | ==============================================================================
15:01:14 policy-csit | Healthcheck :: Verify OPA PDP health check | PASS |
15:01:14 policy-csit | ------------------------------------------------------------------------------
15:01:14 policy-csit | ValidateDataBeforePolicyDeployment | PASS |
15:01:14 policy-csit | ------------------------------------------------------------------------------
15:01:14 policy-csit | ValidatesZonePolicy | PASS |
15:01:14 policy-csit | ------------------------------------------------------------------------------
15:01:14 policy-csit | ValidatesVehiclePolicy | PASS |
15:01:14 policy-csit | ------------------------------------------------------------------------------
15:01:14 policy-csit | ValidatesAbacPolicy | PASS |
15:01:14 policy-csit | ------------------------------------------------------------------------------
15:01:14 policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas.Opa-Pdp-Test | PASS |
15:01:14 policy-csit | 5 tests, 5 passed, 0 failed
15:01:14 policy-csit | ==============================================================================
15:01:14 policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas.Opa-Pdp-Slas
15:01:14 policy-csit | ==============================================================================
15:01:14 policy-csit | WaitForPrometheusServer :: Sleep time to wait for Prometheus serve... | PASS |
15:01:14 policy-csit | ------------------------------------------------------------------------------
15:01:14 policy-csit | ValidateOPAPolicyDecisionsTotalCounter :: Validate opa policy deci... | PASS |
15:01:14 policy-csit | ------------------------------------------------------------------------------
15:01:14 policy-csit | ValidateOPAPolicyDataTotalCounter :: Validate opa policy data coun... | PASS |
15:01:14 policy-csit | ------------------------------------------------------------------------------
15:01:14 policy-csit | ValidateOPADecisionAverageResponseTime :: Ensure average response ... | PASS |
15:01:14 policy-csit | ------------------------------------------------------------------------------
15:01:14 policy-csit | ValidateOPADataAverageResponseTime :: Ensure average response time... | PASS |
15:01:14 policy-csit | ------------------------------------------------------------------------------
15:01:14 policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas.Opa-Pdp-Slas | PASS |
15:01:14 policy-csit | 5 tests, 5 passed, 0 failed
15:01:14 policy-csit | ==============================================================================
15:01:14 policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas | PASS |
15:01:14 policy-csit | 10 tests, 10 passed, 0 failed
15:01:14 policy-csit | ==============================================================================
15:01:14 policy-csit | Output: /tmp/results/output.xml
15:01:14 policy-csit | Log: /tmp/results/log.html
15:01:14 policy-csit | Report: /tmp/results/report.html
15:01:14 policy-csit | RESULT: 0
15:01:14 policy-db-migrator | Waiting for postgres port 5432...
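The migrator's start-up gate is a plain TCP-connect retry loop (the nc attempts just below). A minimal Python equivalent, assuming the same container DNS name and port from the log; the retry interval is an assumption, since nc simply loops:

    import socket
    import time

    # Block until postgres:5432 accepts a TCP connection, like the nc loop below.
    while True:
        try:
            socket.create_connection(("postgres", 5432), timeout=2).close()
            break              # "Connection ... succeeded!"
        except OSError:
            time.sleep(1)      # connection refused -- retry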
15:01:14 policy-db-migrator | nc: connect to postgres (172.17.0.2) port 5432 (tcp) failed: Connection refused
15:01:14 policy-db-migrator | nc: connect to postgres (172.17.0.2) port 5432 (tcp) failed: Connection refused
15:01:14 policy-db-migrator | nc: connect to postgres (172.17.0.2) port 5432 (tcp) failed: Connection refused
15:01:14 policy-db-migrator | Connection to postgres (172.17.0.2) 5432 port [tcp/postgresql] succeeded!
15:01:14 policy-db-migrator | Initializing policyadmin...
15:01:14 policy-db-migrator | 321 blocks
15:01:14 policy-db-migrator | Preparing upgrade release version: 0800
15:01:14 policy-db-migrator | Preparing upgrade release version: 0900
15:01:14 policy-db-migrator | Preparing upgrade release version: 1000
15:01:14 policy-db-migrator | Preparing upgrade release version: 1100
15:01:14 policy-db-migrator | Preparing upgrade release version: 1200
15:01:14 policy-db-migrator | Preparing upgrade release version: 1300
15:01:14 policy-db-migrator | Done
15:01:14 policy-db-migrator | List of databases
15:01:14 policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges
15:01:14 policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+-----------------------------
15:01:14 policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
15:01:14 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
15:01:14 policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
15:01:14 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
15:01:14 policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
15:01:14 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
15:01:14 policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
15:01:14 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
15:01:14 policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
15:01:14 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
15:01:14 policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
15:01:14 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
15:01:14 policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | |
15:01:14 policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
15:01:14 policy-db-migrator | | | | | | | | | postgres=CTc/postgres
15:01:14 policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
15:01:14 policy-db-migrator | | | | | | | | | postgres=CTc/postgres
15:01:14 policy-db-migrator | (9 rows)
15:01:14 policy-db-migrator |
15:01:14 policy-db-migrator | CREATE TABLE
15:01:14 policy-db-migrator | CREATE TABLE
15:01:14 policy-db-migrator | INSERT 0 1
15:01:14 policy-db-migrator | name | version
15:01:14 policy-db-migrator | -------------+---------
15:01:14 policy-db-migrator | policyadmin | 0
15:01:14 policy-db-migrator | (1 row)
15:01:14 policy-db-migrator |
15:01:14 policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime
15:01:14 policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+--------
15:01:14 policy-db-migrator | (0 rows)
15:01:14 policy-db-migrator |
15:01:14 policy-db-migrator | policyadmin: upgrade available: 0 -> 1300
15:01:14 policy-db-migrator | List of databases
15:01:14 policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges
15:01:14 policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+-----------------------------
15:01:14 policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
15:01:14 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
15:01:14 policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
15:01:14 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
15:01:14 policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
15:01:14 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
15:01:14 policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
15:01:14 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
15:01:14 policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
15:01:14 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
15:01:14 policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
15:01:14 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
15:01:14 policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | |
15:01:14 policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
15:01:14 policy-db-migrator | | | | | | | | | postgres=CTc/postgres
15:01:14 policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
15:01:14 policy-db-migrator | | | | | | | | | postgres=CTc/postgres
15:01:14 policy-db-migrator | (9 rows)
15:01:14 policy-db-migrator |
15:01:14 policy-db-migrator | CREATE TABLE
15:01:14 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping
15:01:14 policy-db-migrator | NOTICE: relation "policyadmin_schema_changelog" already exists, skipping
15:01:14 policy-db-migrator | CREATE TABLE
15:01:14 policy-db-migrator | upgrade: 0 -> 1300
15:01:14 policy-db-migrator | rc=0
15:01:14 policy-db-migrator |
15:01:14 policy-db-migrator | > upgrade 0100-jpapdpgroup_properties.sql
15:01:14 policy-db-migrator | CREATE TABLE
15:01:14 policy-db-migrator | INSERT 0 1
15:01:14 policy-db-migrator | rc=0
15:01:14 policy-db-migrator |
15:01:14 policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql
15:01:14 policy-db-migrator | CREATE TABLE
15:01:14 policy-db-migrator | INSERT 0 1
15:01:14 policy-db-migrator | rc=0
15:01:14 policy-db-migrator |
15:01:14 policy-db-migrator | > upgrade 0120-jpapdpsubgroup_policies.sql
15:01:14 policy-db-migrator | CREATE TABLE
15:01:14 policy-db-migrator | INSERT 0 1
15:01:14 policy-db-migrator | rc=0
15:01:14 policy-db-migrator |
15:01:14 policy-db-migrator | > upgrade 0130-jpapdpsubgroup_properties.sql
15:01:14 policy-db-migrator | CREATE TABLE
15:01:14 policy-db-migrator | INSERT 0 1
15:01:14 policy-db-migrator | rc=0
15:01:14 policy-db-migrator |
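Every "> upgrade NNNN-*.sql" step that follows has the same shape: run the script, then append a row to the policyadmin_schema_changelog table whose columns (id, script, operation, from_version, to_version, tag, success, attime) are printed near the end of this log. A hedged sketch of that bookkeeping loop with psycopg2; the connection parameters and the exact INSERT are illustrative, not the migrator's actual code:

    import glob
    import psycopg2

    # Placeholder connection details for this sketch.
    conn = psycopg2.connect(host="postgres", dbname="policyadmin",
                            user="policy_user", password="...")
    conn.autocommit = True
    cur = conn.cursor()

    for script in sorted(glob.glob("upgrade/*.sql")):  # e.g. 0100-jpapdpgroup_properties.sql
        with open(script) as f:
            cur.execute(f.read())                      # the CREATE TABLE / ALTER TABLE above
        # Record the step, mirroring the changelog columns printed later in this log.
        cur.execute(
            "INSERT INTO policyadmin_schema_changelog "
            "(script, operation, from_version, to_version, tag, success, attime) "
            "VALUES (%s, 'upgrade', '0', '0800', %s, 1, now())",
            (script, "1606251455080800u"),
        )

The per-step "INSERT 0 1" lines in the output are exactly those changelog inserts succeeding; "rc=0" is each script's exit status.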
15:01:14 policy-db-migrator | > upgrade 0140-jpapdpsubgroup_supportedpolicytypes.sql 15:01:14 policy-db-migrator | CREATE TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0150-jpatoscacapabilityassignment_attributes.sql 15:01:14 policy-db-migrator | CREATE TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0160-jpatoscacapabilityassignment_metadata.sql 15:01:14 policy-db-migrator | CREATE TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0170-jpatoscacapabilityassignment_occurrences.sql 15:01:14 policy-db-migrator | CREATE TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0180-jpatoscacapabilityassignment_properties.sql 15:01:14 policy-db-migrator | CREATE TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0190-jpatoscacapabilitytype_metadata.sql 15:01:14 policy-db-migrator | CREATE TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0200-jpatoscacapabilitytype_properties.sql 15:01:14 policy-db-migrator | CREATE TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0210-jpatoscadatatype_constraints.sql 15:01:14 policy-db-migrator | CREATE TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0220-jpatoscadatatype_metadata.sql 15:01:14 policy-db-migrator | CREATE TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0230-jpatoscadatatype_properties.sql 15:01:14 policy-db-migrator | CREATE TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0240-jpatoscanodetemplate_metadata.sql 15:01:14 policy-db-migrator | CREATE TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0250-jpatoscanodetemplate_properties.sql 15:01:14 policy-db-migrator | CREATE TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0260-jpatoscanodetype_metadata.sql 15:01:14 policy-db-migrator | CREATE TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0270-jpatoscanodetype_properties.sql 15:01:14 policy-db-migrator | CREATE TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0280-jpatoscapolicy_metadata.sql 15:01:14 policy-db-migrator | CREATE TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0290-jpatoscapolicy_properties.sql 15:01:14 
policy-db-migrator | CREATE TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0300-jpatoscapolicy_targets.sql 15:01:14 policy-db-migrator | CREATE TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0310-jpatoscapolicytype_metadata.sql 15:01:14 policy-db-migrator | CREATE TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0320-jpatoscapolicytype_properties.sql 15:01:14 policy-db-migrator | CREATE TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0330-jpatoscapolicytype_targets.sql 15:01:14 policy-db-migrator | CREATE TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0340-jpatoscapolicytype_triggers.sql 15:01:14 policy-db-migrator | CREATE TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0350-jpatoscaproperty_constraints.sql 15:01:14 policy-db-migrator | CREATE TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0360-jpatoscaproperty_metadata.sql 15:01:14 policy-db-migrator | CREATE TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0370-jpatoscarelationshiptype_metadata.sql 15:01:14 policy-db-migrator | CREATE TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0380-jpatoscarelationshiptype_properties.sql 15:01:14 policy-db-migrator | CREATE TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0390-jpatoscarequirement_metadata.sql 15:01:14 policy-db-migrator | CREATE TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0400-jpatoscarequirement_occurrences.sql 15:01:14 policy-db-migrator | CREATE TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0410-jpatoscarequirement_properties.sql 15:01:14 policy-db-migrator | CREATE TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0420-jpatoscaservicetemplate_metadata.sql 15:01:14 policy-db-migrator | CREATE TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0430-jpatoscatopologytemplate_inputs.sql 15:01:14 policy-db-migrator | CREATE TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0440-pdpgroup_pdpsubgroup.sql 15:01:14 policy-db-migrator | CREATE TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 
15:01:14 policy-db-migrator | > upgrade 0450-pdpgroup.sql 15:01:14 policy-db-migrator | CREATE TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0460-pdppolicystatus.sql 15:01:14 policy-db-migrator | CREATE TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0470-pdp.sql 15:01:14 policy-db-migrator | CREATE TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0480-pdpstatistics.sql 15:01:14 policy-db-migrator | CREATE TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0490-pdpsubgroup_pdp.sql 15:01:14 policy-db-migrator | CREATE TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0500-pdpsubgroup.sql 15:01:14 policy-db-migrator | CREATE TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0510-toscacapabilityassignment.sql 15:01:14 policy-db-migrator | CREATE TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0520-toscacapabilityassignments.sql 15:01:14 policy-db-migrator | CREATE TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0530-toscacapabilityassignments_toscacapabilityassignment.sql 15:01:14 policy-db-migrator | CREATE TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0540-toscacapabilitytype.sql 15:01:14 policy-db-migrator | CREATE TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0550-toscacapabilitytypes.sql 15:01:14 policy-db-migrator | CREATE TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0560-toscacapabilitytypes_toscacapabilitytype.sql 15:01:14 policy-db-migrator | CREATE TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0570-toscadatatype.sql 15:01:14 policy-db-migrator | CREATE TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0580-toscadatatypes.sql 15:01:14 policy-db-migrator | CREATE TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0590-toscadatatypes_toscadatatype.sql 15:01:14 policy-db-migrator | CREATE TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0600-toscanodetemplate.sql 15:01:14 policy-db-migrator | CREATE TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 
0610-toscanodetemplates.sql 15:01:14 policy-db-migrator | CREATE TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0620-toscanodetemplates_toscanodetemplate.sql 15:01:14 policy-db-migrator | CREATE TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0630-toscanodetype.sql 15:01:14 policy-db-migrator | CREATE TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0640-toscanodetypes.sql 15:01:14 policy-db-migrator | CREATE TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0650-toscanodetypes_toscanodetype.sql 15:01:14 policy-db-migrator | CREATE TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0660-toscaparameter.sql 15:01:14 policy-db-migrator | CREATE TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0670-toscapolicies.sql 15:01:14 policy-db-migrator | CREATE TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0680-toscapolicies_toscapolicy.sql 15:01:14 policy-db-migrator | CREATE TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0690-toscapolicy.sql 15:01:14 policy-db-migrator | CREATE TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0700-toscapolicytype.sql 15:01:14 policy-db-migrator | CREATE TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0710-toscapolicytypes.sql 15:01:14 policy-db-migrator | CREATE TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0720-toscapolicytypes_toscapolicytype.sql 15:01:14 policy-db-migrator | CREATE TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0730-toscaproperty.sql 15:01:14 policy-db-migrator | CREATE TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0740-toscarelationshiptype.sql 15:01:14 policy-db-migrator | CREATE TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0750-toscarelationshiptypes.sql 15:01:14 policy-db-migrator | CREATE TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0760-toscarelationshiptypes_toscarelationshiptype.sql 15:01:14 policy-db-migrator | CREATE TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0770-toscarequirement.sql 15:01:14 
policy-db-migrator | CREATE TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0780-toscarequirements.sql 15:01:14 policy-db-migrator | CREATE TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0790-toscarequirements_toscarequirement.sql 15:01:14 policy-db-migrator | CREATE TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0800-toscaservicetemplate.sql 15:01:14 policy-db-migrator | CREATE TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0810-toscatopologytemplate.sql 15:01:14 policy-db-migrator | CREATE TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0820-toscatrigger.sql 15:01:14 policy-db-migrator | CREATE TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0830-FK_ToscaNodeTemplate_capabilitiesName.sql 15:01:14 policy-db-migrator | CREATE INDEX 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0840-FK_ToscaNodeTemplate_requirementsName.sql 15:01:14 policy-db-migrator | CREATE INDEX 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0850-FK_ToscaNodeType_requirementsName.sql 15:01:14 policy-db-migrator | CREATE INDEX 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql 15:01:14 policy-db-migrator | CREATE INDEX 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0870-FK_ToscaServiceTemplate_dataTypesName.sql 15:01:14 policy-db-migrator | CREATE INDEX 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0880-FK_ToscaServiceTemplate_nodeTypesName.sql 15:01:14 policy-db-migrator | CREATE INDEX 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0890-FK_ToscaServiceTemplate_policyTypesName.sql 15:01:14 policy-db-migrator | CREATE INDEX 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql 15:01:14 policy-db-migrator | CREATE INDEX 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql 15:01:14 policy-db-migrator | CREATE INDEX 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql 15:01:14 policy-db-migrator | CREATE INDEX 15:01:14 policy-db-migrator | INSERT 0 1 
15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0940-PdpPolicyStatus_PdpGroup.sql 15:01:14 policy-db-migrator | CREATE INDEX 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql 15:01:14 policy-db-migrator | CREATE INDEX 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0960-FK_ToscaNodeTemplate_capabilitiesName.sql 15:01:14 policy-db-migrator | ALTER TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0970-FK_ToscaNodeTemplate_requirementsName.sql 15:01:14 policy-db-migrator | ALTER TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0980-FK_ToscaNodeType_requirementsName.sql 15:01:14 policy-db-migrator | ALTER TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql 15:01:14 policy-db-migrator | ALTER TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 1000-FK_ToscaServiceTemplate_dataTypesName.sql 15:01:14 policy-db-migrator | ALTER TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 1010-FK_ToscaServiceTemplate_nodeTypesName.sql 15:01:14 policy-db-migrator | ALTER TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 1020-FK_ToscaServiceTemplate_policyTypesName.sql 15:01:14 policy-db-migrator | ALTER TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql 15:01:14 policy-db-migrator | ALTER TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql 15:01:14 policy-db-migrator | ALTER TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 1050-FK_ToscaTopologyTemplate_policyName.sql 15:01:14 policy-db-migrator | ALTER TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql 15:01:14 policy-db-migrator | ALTER TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0100-pdp.sql 15:01:14 policy-db-migrator | ALTER TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql 15:01:14 policy-db-migrator | CREATE INDEX 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 
policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0120-pk_pdpstatistics.sql 15:01:14 policy-db-migrator | ALTER TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0130-pdpstatistics.sql 15:01:14 policy-db-migrator | ALTER TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0140-pk_pdpstatistics.sql 15:01:14 policy-db-migrator | UPDATE 0 15:01:14 policy-db-migrator | ALTER TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0150-pdpstatistics.sql 15:01:14 policy-db-migrator | ALTER TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0160-jpapdpstatistics_enginestats.sql 15:01:14 policy-db-migrator | ALTER TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0170-jpapdpstatistics_enginestats.sql 15:01:14 policy-db-migrator | UPDATE 0 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0180-jpapdpstatistics_enginestats.sql 15:01:14 policy-db-migrator | ALTER TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0190-jpapolicyaudit.sql 15:01:14 policy-db-migrator | CREATE TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0200-JpaPolicyAuditIndex_timestamp.sql 15:01:14 policy-db-migrator | CREATE INDEX 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0210-sequence.sql 15:01:14 policy-db-migrator | CREATE TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0220-sequence.sql 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0100-jpatoscapolicy_targets.sql 15:01:14 policy-db-migrator | ALTER TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0110-jpatoscapolicytype_targets.sql 15:01:14 policy-db-migrator | ALTER TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0120-toscatrigger.sql 15:01:14 policy-db-migrator | DROP TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0130-jpatoscapolicytype_triggers.sql 15:01:14 policy-db-migrator | ALTER TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0140-toscaparameter.sql 15:01:14 policy-db-migrator | DROP TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator 
| > upgrade 0150-toscaproperty.sql 15:01:14 policy-db-migrator | DROP TABLE 15:01:14 policy-db-migrator | DROP TABLE 15:01:14 policy-db-migrator | DROP TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0160-jpapolicyaudit_pk.sql 15:01:14 policy-db-migrator | ALTER TABLE 15:01:14 policy-db-migrator | ALTER TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0170-pdpstatistics_pk.sql 15:01:14 policy-db-migrator | ALTER TABLE 15:01:14 policy-db-migrator | ALTER TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0180-jpatoscanodetemplate_metadata.sql 15:01:14 policy-db-migrator | ALTER TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0100-upgrade.sql 15:01:14 policy-db-migrator | msg 15:01:14 policy-db-migrator | --------------------------- 15:01:14 policy-db-migrator | upgrade to 1100 completed 15:01:14 policy-db-migrator | (1 row) 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0100-jpapolicyaudit_renameuser.sql 15:01:14 policy-db-migrator | ALTER TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql 15:01:14 policy-db-migrator | DROP INDEX 15:01:14 policy-db-migrator | CREATE INDEX 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0120-audit_sequence.sql 15:01:14 policy-db-migrator | CREATE TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0130-statistics_sequence.sql 15:01:14 policy-db-migrator | CREATE TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0100-pdpstatistics.sql 15:01:14 policy-db-migrator | DROP TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql 15:01:14 policy-db-migrator | DROP TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | rc=0 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | > upgrade 0120-statistics_sequence.sql 15:01:14 policy-db-migrator | DROP TABLE 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | INSERT 0 1 15:01:14 policy-db-migrator | policyadmin: OK: upgrade (1300) 15:01:14 policy-db-migrator | List of databases 15:01:14 policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges 15:01:14 policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- 15:01:14 policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 15:01:14 
15:01:14 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
15:01:14 policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
15:01:14 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
15:01:14 policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
15:01:14 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
15:01:14 policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
15:01:14 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
15:01:14 policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
15:01:14 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
15:01:14 policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user +
15:01:14 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user
15:01:14 policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | |
15:01:14 policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
15:01:14 policy-db-migrator | | | | | | | | | postgres=CTc/postgres
15:01:14 policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres +
15:01:14 policy-db-migrator | | | | | | | | | postgres=CTc/postgres
15:01:14 policy-db-migrator | (9 rows)
15:01:14 policy-db-migrator |
15:01:14 policy-db-migrator | CREATE TABLE
15:01:14 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping
15:01:14 policy-db-migrator | NOTICE: relation "policyadmin_schema_changelog" already exists, skipping
15:01:14 policy-db-migrator | CREATE TABLE
15:01:14 policy-db-migrator | name | version
15:01:14 policy-db-migrator | -------------+---------
15:01:14 policy-db-migrator | policyadmin | 1300
15:01:14 policy-db-migrator | (1 row)
15:01:14 policy-db-migrator |
15:01:14 policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime
15:01:14 policy-db-migrator | -----+---------------------------------------------------------------+-----------+--------------+------------+-------------------+---------+----------------------------
15:01:14 policy-db-migrator | 1 | 0100-jpapdpgroup_properties.sql | upgrade | 0 | 0800 | 1606251455080800u | 1 | 2025-06-16 14:55:08.081157
15:01:14 policy-db-migrator | 2 | 0110-jpapdpstatistics_enginestats.sql | upgrade | 0 | 0800 | 1606251455080800u | 1 | 2025-06-16 14:55:08.124043
15:01:14 policy-db-migrator | 3 | 0120-jpapdpsubgroup_policies.sql | upgrade | 0 | 0800 | 1606251455080800u | 1 | 2025-06-16 14:55:08.171597
15:01:14 policy-db-migrator | 4 | 0130-jpapdpsubgroup_properties.sql | upgrade | 0 | 0800 | 1606251455080800u | 1 | 2025-06-16 14:55:08.228435
15:01:14 policy-db-migrator | 5 | 0140-jpapdpsubgroup_supportedpolicytypes.sql | upgrade | 0 | 0800 | 1606251455080800u | 1 | 2025-06-16 14:55:08.283188
15:01:14 policy-db-migrator | 6 | 0150-jpatoscacapabilityassignment_attributes.sql | upgrade | 0 | 0800 | 1606251455080800u | 1 | 2025-06-16 14:55:08.328811
15:01:14 policy-db-migrator | 7 | 0160-jpatoscacapabilityassignment_metadata.sql | upgrade | 0 | 0800 | 1606251455080800u | 1 | 2025-06-16 14:55:08.38136
15:01:14 policy-db-migrator | 8 | 0170-jpatoscacapabilityassignment_occurrences.sql | upgrade | 0 | 0800 | 1606251455080800u | 1 | 2025-06-16 14:55:08.511545
15:01:14 policy-db-migrator | 9 | 0180-jpatoscacapabilityassignment_properties.sql | upgrade | 0 | 0800 | 1606251455080800u | 1 | 2025-06-16 14:55:08.98283
15:01:14 policy-db-migrator | 10 | 0190-jpatoscacapabilitytype_metadata.sql | upgrade | 0 | 0800 | 1606251455080800u | 1 | 2025-06-16 14:55:09.037467
15:01:14 policy-db-migrator | 11 | 0200-jpatoscacapabilitytype_properties.sql | upgrade | 0 | 0800 | 1606251455080800u | 1 | 2025-06-16 14:55:09.084835
15:01:14 policy-db-migrator | 12 | 0210-jpatoscadatatype_constraints.sql | upgrade | 0 | 0800 | 1606251455080800u | 1 | 2025-06-16 14:55:09.130937
15:01:14 policy-db-migrator | 13 | 0220-jpatoscadatatype_metadata.sql | upgrade | 0 | 0800 | 1606251455080800u | 1 | 2025-06-16 14:55:09.207342
15:01:14 policy-db-migrator | 14 | 0230-jpatoscadatatype_properties.sql | upgrade | 0 | 0800 | 1606251455080800u | 1 | 2025-06-16 14:55:09.245737
15:01:14 policy-db-migrator | 15 | 0240-jpatoscanodetemplate_metadata.sql | upgrade | 0 | 0800 | 1606251455080800u | 1 | 2025-06-16 14:55:09.289757
15:01:14 policy-db-migrator | 16 | 0250-jpatoscanodetemplate_properties.sql | upgrade | 0 | 0800 | 1606251455080800u | 1 | 2025-06-16 14:55:09.340644
15:01:14 policy-db-migrator | 17 | 0260-jpatoscanodetype_metadata.sql | upgrade | 0 | 0800 | 1606251455080800u | 1 | 2025-06-16 14:55:09.38641
15:01:14 policy-db-migrator | 18 | 0270-jpatoscanodetype_properties.sql | upgrade | 0 | 0800 | 1606251455080800u | 1 | 2025-06-16 14:55:09.438623
15:01:14 policy-db-migrator | 19 | 0280-jpatoscapolicy_metadata.sql | upgrade | 0 | 0800 | 1606251455080800u | 1 | 2025-06-16 14:55:09.489983
15:01:14 policy-db-migrator | 20 | 0290-jpatoscapolicy_properties.sql | upgrade | 0 | 0800 | 1606251455080800u | 1 | 2025-06-16 14:55:09.534872
15:01:14 policy-db-migrator | 21 | 0300-jpatoscapolicy_targets.sql | upgrade | 0 | 0800 | 1606251455080800u | 1 | 2025-06-16 14:55:09.581058
15:01:14 policy-db-migrator | 22 | 0310-jpatoscapolicytype_metadata.sql | upgrade | 0 | 0800 | 1606251455080800u | 1 | 2025-06-16 14:55:09.627618
15:01:14 policy-db-migrator | 23 | 0320-jpatoscapolicytype_properties.sql | upgrade | 0 | 0800 | 1606251455080800u | 1 | 2025-06-16 14:55:09.670185
15:01:14 policy-db-migrator | 24 | 0330-jpatoscapolicytype_targets.sql | upgrade | 0 | 0800 | 1606251455080800u | 1 | 2025-06-16 14:55:09.721263
15:01:14 policy-db-migrator | 25 | 0340-jpatoscapolicytype_triggers.sql | upgrade | 0 | 0800 | 1606251455080800u | 1 | 2025-06-16 14:55:09.771139
15:01:14 policy-db-migrator | 26 | 0350-jpatoscaproperty_constraints.sql | upgrade | 0 | 0800 | 1606251455080800u | 1 | 2025-06-16 14:55:09.82259
15:01:14 policy-db-migrator | 27 | 0360-jpatoscaproperty_metadata.sql | upgrade | 0 | 0800 | 1606251455080800u | 1 | 2025-06-16 14:55:09.879449
15:01:14 policy-db-migrator | 28 | 0370-jpatoscarelationshiptype_metadata.sql | upgrade | 0 | 0800 | 1606251455080800u | 1 | 2025-06-16 14:55:09.928728
15:01:14 policy-db-migrator | 29 | 0380-jpatoscarelationshiptype_properties.sql | upgrade | 0 | 0800 | 1606251455080800u | 1 | 2025-06-16 14:55:09.977658
15:01:14 policy-db-migrator | 30 | 0390-jpatoscarequirement_metadata.sql | upgrade | 0 | 0800 | 1606251455080800u | 1 | 2025-06-16 14:55:10.032501
15:01:14 policy-db-migrator | 31 | 0400-jpatoscarequirement_occurrences.sql | upgrade | 0 | 0800 | 1606251455080800u | 1 | 2025-06-16 14:55:10.084645
15:01:14 policy-db-migrator | 32 | 0410-jpatoscarequirement_properties.sql | upgrade | 0 | 0800 | 1606251455080800u | 1 | 2025-06-16 14:55:10.141291
15:01:14 policy-db-migrator | 33 | 0420-jpatoscaservicetemplate_metadata.sql | upgrade | 0 | 0800 | 1606251455080800u | 1 | 2025-06-16 14:55:10.191273
15:01:14 policy-db-migrator | 34 | 0430-jpatoscatopologytemplate_inputs.sql | upgrade | 0 | 0800 | 1606251455080800u | 1 | 2025-06-16 14:55:10.242852
15:01:14 policy-db-migrator | 35 | 0440-pdpgroup_pdpsubgroup.sql | upgrade | 0 | 0800 | 1606251455080800u | 1 | 2025-06-16 14:55:10.307029
15:01:14 policy-db-migrator | 36 | 0450-pdpgroup.sql | upgrade | 0 | 0800 | 1606251455080800u | 1 | 2025-06-16 14:55:10.357874
15:01:14 policy-db-migrator | 37 | 0460-pdppolicystatus.sql | upgrade | 0 | 0800 | 1606251455080800u | 1 | 2025-06-16 14:55:10.503384
15:01:14 policy-db-migrator | 38 | 0470-pdp.sql | upgrade | 0 | 0800 | 1606251455080800u | 1 | 2025-06-16 14:55:10.562526
15:01:14 policy-db-migrator | 39 | 0480-pdpstatistics.sql | upgrade | 0 | 0800 | 1606251455080800u | 1 | 2025-06-16 14:55:10.613381
15:01:14 policy-db-migrator | 40 | 0490-pdpsubgroup_pdp.sql | upgrade | 0 | 0800 | 1606251455080800u | 1 | 2025-06-16 14:55:10.666098
15:01:14 policy-db-migrator | 41 | 0500-pdpsubgroup.sql | upgrade | 0 | 0800 | 1606251455080800u | 1 | 2025-06-16 14:55:10.713101
15:01:14 policy-db-migrator | 42 | 0510-toscacapabilityassignment.sql | upgrade | 0 | 0800 | 1606251455080800u | 1 | 2025-06-16 14:55:10.769141
15:01:14 policy-db-migrator | 43 | 0520-toscacapabilityassignments.sql | upgrade | 0 | 0800 | 1606251455080800u | 1 | 2025-06-16 14:55:10.823474
15:01:14 policy-db-migrator | 44 | 0530-toscacapabilityassignments_toscacapabilityassignment.sql | upgrade | 0 | 0800 | 1606251455080800u | 1 | 2025-06-16 14:55:10.878258
15:01:14 policy-db-migrator | 45 | 0540-toscacapabilitytype.sql | upgrade | 0 | 0800 | 1606251455080800u | 1 | 2025-06-16 14:55:10.938009
15:01:14 policy-db-migrator | 46 | 0550-toscacapabilitytypes.sql | upgrade | 0 | 0800 | 1606251455080800u | 1 | 2025-06-16 14:55:10.991177
15:01:14 policy-db-migrator | 47 | 0560-toscacapabilitytypes_toscacapabilitytype.sql | upgrade | 0 | 0800 | 1606251455080800u | 1 | 2025-06-16 14:55:11.039555
15:01:14 policy-db-migrator | 48 | 0570-toscadatatype.sql | upgrade | 0 | 0800 | 1606251455080800u | 1 | 2025-06-16 14:55:11.105352
15:01:14 policy-db-migrator | 49 | 0580-toscadatatypes.sql | upgrade | 0 | 0800 | 1606251455080800u | 1 | 2025-06-16 14:55:11.510803
15:01:14 policy-db-migrator | 50 | 0590-toscadatatypes_toscadatatype.sql | upgrade | 0 | 0800 | 1606251455080800u | 1 | 2025-06-16 14:55:23.527549
15:01:14 policy-db-migrator | 51 | 0600-toscanodetemplate.sql | upgrade | 0 | 0800 | 1606251455080800u | 1 | 2025-06-16 14:55:23.583854
15:01:14 policy-db-migrator | 52 | 0610-toscanodetemplates.sql | upgrade | 0 | 0800 | 1606251455080800u | 1 | 2025-06-16 14:55:23.636135
15:01:14 policy-db-migrator | 53 | 0620-toscanodetemplates_toscanodetemplate.sql | upgrade | 0 | 0800 | 1606251455080800u | 1 | 2025-06-16 14:55:23.680497
15:01:14 policy-db-migrator | 54 | 0630-toscanodetype.sql | upgrade | 0 | 0800 | 1606251455080800u | 1 | 2025-06-16 14:55:23.732386
15:01:14 policy-db-migrator | 55 | 0640-toscanodetypes.sql | upgrade | 0 | 0800 | 1606251455080800u | 1 | 2025-06-16 14:55:23.787687
15:01:14 policy-db-migrator | 56 | 0650-toscanodetypes_toscanodetype.sql | upgrade | 0 | 0800 | 1606251455080800u | 1 | 2025-06-16 14:55:23.841734
15:01:14 policy-db-migrator | 57 | 0660-toscaparameter.sql | upgrade | 0 | 0800 | 1606251455080800u | 1 | 2025-06-16 14:55:23.899266
15:01:14 policy-db-migrator | 58 | 0670-toscapolicies.sql | upgrade | 0 | 0800 | 1606251455080800u | 1 | 2025-06-16 14:55:23.952799
15:01:14 policy-db-migrator | 59 | 0680-toscapolicies_toscapolicy.sql | upgrade | 0 | 0800 | 1606251455080800u | 1 | 2025-06-16 14:55:24.008185
15:01:14 policy-db-migrator | 60 | 0690-toscapolicy.sql | upgrade | 0 | 0800 | 1606251455080800u | 1 | 2025-06-16 14:55:24.059444
15:01:14 policy-db-migrator | 61 | 0700-toscapolicytype.sql | upgrade | 0 | 0800 | 1606251455080800u | 1 | 2025-06-16 14:55:24.116759
15:01:14 policy-db-migrator | 62 | 0710-toscapolicytypes.sql | upgrade | 0 | 0800 | 1606251455080800u | 1 | 2025-06-16 14:55:24.170993
15:01:14 policy-db-migrator | 63 | 0720-toscapolicytypes_toscapolicytype.sql | upgrade | 0 | 0800 | 1606251455080800u | 1 | 2025-06-16 14:55:24.225328
15:01:14 policy-db-migrator | 64 | 0730-toscaproperty.sql | upgrade | 0 | 0800 | 1606251455080800u | 1 | 2025-06-16 14:55:24.283822
15:01:14 policy-db-migrator | 65 | 0740-toscarelationshiptype.sql | upgrade | 0 | 0800 | 1606251455080800u | 1 | 2025-06-16 14:55:24.338099
15:01:14 policy-db-migrator | 66 | 0750-toscarelationshiptypes.sql | upgrade | 0 | 0800 | 1606251455080800u | 1 | 2025-06-16 14:55:24.389951
15:01:14 policy-db-migrator | 67 | 0760-toscarelationshiptypes_toscarelationshiptype.sql | upgrade | 0 | 0800 | 1606251455080800u | 1 | 2025-06-16 14:55:24.441833
15:01:14 policy-db-migrator | 68 | 0770-toscarequirement.sql | upgrade | 0 | 0800 | 1606251455080800u | 1 | 2025-06-16 14:55:24.493768
15:01:14 policy-db-migrator | 69 | 0780-toscarequirements.sql | upgrade | 0 | 0800 | 1606251455080800u | 1 | 2025-06-16 14:55:24.548378
15:01:14 policy-db-migrator | 70 | 0790-toscarequirements_toscarequirement.sql | upgrade | 0 | 0800 | 1606251455080800u | 1 | 2025-06-16 14:55:24.601681
15:01:14 policy-db-migrator | 71 | 0800-toscaservicetemplate.sql | upgrade | 0 | 0800 | 1606251455080800u | 1 | 2025-06-16 14:55:24.657152
15:01:14 policy-db-migrator | 72 | 0810-toscatopologytemplate.sql | upgrade | 0 | 0800 | 1606251455080800u | 1 | 2025-06-16 14:55:24.712303
15:01:14 policy-db-migrator | 73 | 0820-toscatrigger.sql | upgrade | 0 | 0800 | 1606251455080800u | 1 | 2025-06-16 14:55:24.774541
15:01:14 policy-db-migrator | 74 | 0830-FK_ToscaNodeTemplate_capabilitiesName.sql | upgrade | 0 | 0800 | 1606251455080800u | 1 | 2025-06-16 14:55:24.825735
15:01:14 policy-db-migrator | 75 | 0840-FK_ToscaNodeTemplate_requirementsName.sql | upgrade | 0 | 0800 | 1606251455080800u | 1 | 2025-06-16 14:55:24.883509
15:01:14 policy-db-migrator | 76 | 0850-FK_ToscaNodeType_requirementsName.sql | upgrade | 0 | 0800 | 1606251455080800u | 1 | 2025-06-16 14:55:24.936579
15:01:14 policy-db-migrator | 77 | 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql | upgrade | 0 | 0800 | 1606251455080800u | 1 | 2025-06-16 14:55:25.001865
15:01:14 policy-db-migrator | 78 | 0870-FK_ToscaServiceTemplate_dataTypesName.sql | upgrade | 0 | 0800 | 1606251455080800u | 1 | 2025-06-16 14:55:25.058133
15:01:14 policy-db-migrator | 79 | 0880-FK_ToscaServiceTemplate_nodeTypesName.sql | upgrade | 0 | 0800 | 1606251455080800u | 1 | 2025-06-16 14:55:25.108873
15:01:14 policy-db-migrator | 80 | 0890-FK_ToscaServiceTemplate_policyTypesName.sql | upgrade | 0 | 0800 | 1606251455080800u | 1 | 2025-06-16 14:55:25.158239
15:01:14 policy-db-migrator | 81 | 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql | upgrade | 0 | 0800 | 1606251455080800u | 1 | 2025-06-16 14:55:25.212556
15:01:14 policy-db-migrator | 82 | 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql | upgrade | 0 | 0800 | 1606251455080800u | 1 | 2025-06-16 14:55:25.269217
15:01:14 policy-db-migrator | 83 | 0920-FK_ToscaTopologyTemplate_policyName.sql | upgrade | 0 | 0800 | 1606251455080800u | 1 | 2025-06-16 14:55:25.325104
15:01:14 policy-db-migrator | 84 | 0940-PdpPolicyStatus_PdpGroup.sql | upgrade | 0 | 0800 | 1606251455080800u | 1 | 2025-06-16 14:55:25.381149
15:01:14 policy-db-migrator | 85 | 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql | upgrade | 0 | 0800 | 1606251455080800u | 1 | 2025-06-16 14:55:25.436923
15:01:14 policy-db-migrator | 86 | 0960-FK_ToscaNodeTemplate_capabilitiesName.sql | upgrade | 0 | 0800 | 1606251455080800u | 1 | 2025-06-16 14:55:25.492755
15:01:14 policy-db-migrator | 87 | 0970-FK_ToscaNodeTemplate_requirementsName.sql | upgrade | 0 | 0800 | 1606251455080800u | 1 | 2025-06-16 14:55:25.545816
15:01:14 policy-db-migrator | 88 | 0980-FK_ToscaNodeType_requirementsName.sql | upgrade | 0 | 0800 | 1606251455080800u | 1 | 2025-06-16 14:55:25.602948
15:01:14 policy-db-migrator | 89 | 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql | upgrade | 0 | 0800 | 1606251455080800u | 1 | 2025-06-16 14:55:25.658541
15:01:14 policy-db-migrator | 90 | 1000-FK_ToscaServiceTemplate_dataTypesName.sql | upgrade | 0 | 0800 | 1606251455080800u | 1 | 2025-06-16 14:55:25.713037
15:01:14 policy-db-migrator | 91 | 1010-FK_ToscaServiceTemplate_nodeTypesName.sql | upgrade | 0 | 0800 | 1606251455080800u | 1 | 2025-06-16 14:55:25.766449
15:01:14 policy-db-migrator | 92 | 1020-FK_ToscaServiceTemplate_policyTypesName.sql | upgrade | 0 | 0800 | 1606251455080800u | 1 | 2025-06-16 14:55:25.81674
15:01:14 policy-db-migrator | 93 | 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql | upgrade | 0 | 0800 | 1606251455080800u | 1 | 2025-06-16 14:55:25.867744
15:01:14 policy-db-migrator | 94 | 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql | upgrade | 0 | 0800 | 1606251455080800u | 1 | 2025-06-16 14:55:25.919298
15:01:14 policy-db-migrator | 95 | 1050-FK_ToscaTopologyTemplate_policyName.sql | upgrade | 0 | 0800 | 1606251455080800u | 1 | 2025-06-16 14:55:25.965847
15:01:14 policy-db-migrator | 96 | 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql | upgrade | 0 | 0800 | 1606251455080800u | 1 | 2025-06-16 14:55:26.020398
15:01:14 policy-db-migrator | 97 | 0100-pdp.sql | upgrade | 0800 | 0900 | 1606251455080900u | 1 | 2025-06-16 14:55:26.071648
15:01:14 policy-db-migrator | 98 | 0110-idx_tsidx1.sql | upgrade | 0800 | 0900 | 1606251455080900u | 1 | 2025-06-16 14:55:26.126098
15:01:14 policy-db-migrator | 99 | 0120-pk_pdpstatistics.sql | upgrade | 0800 | 0900 | 1606251455080900u | 1 | 2025-06-16 14:55:26.178817
15:01:14 policy-db-migrator | 100 | 0130-pdpstatistics.sql | upgrade | 0800 | 0900 | 1606251455080900u | 1 | 2025-06-16 14:55:26.238738
15:01:14 policy-db-migrator | 101 | 0140-pk_pdpstatistics.sql | upgrade | 0800 | 0900 | 1606251455080900u | 1 | 2025-06-16 14:55:26.295915
15:01:14 policy-db-migrator | 102 | 0150-pdpstatistics.sql | upgrade | 0800 | 0900 | 1606251455080900u | 1 | 2025-06-16 14:55:26.354015
15:01:14 policy-db-migrator | 103 | 0160-jpapdpstatistics_enginestats.sql | upgrade | 0800 | 0900 | 1606251455080900u | 1 | 2025-06-16 14:55:26.405753
15:01:14 policy-db-migrator | 104 | 0170-jpapdpstatistics_enginestats.sql | upgrade | 0800 | 0900 | 1606251455080900u | 1 | 2025-06-16 14:55:26.461508
15:01:14 policy-db-migrator | 105 | 0180-jpapdpstatistics_enginestats.sql | upgrade | 0800 | 0900 | 1606251455080900u | 1 | 2025-06-16 14:55:26.515741
15:01:14 policy-db-migrator | 106 | 0190-jpapolicyaudit.sql | upgrade | 0800 | 0900 | 1606251455080900u | 1 | 2025-06-16 14:55:26.572201
15:01:14 policy-db-migrator | 107 | 0200-JpaPolicyAuditIndex_timestamp.sql | upgrade | 0800 | 0900 | 1606251455080900u | 1 | 2025-06-16 14:55:26.624639
15:01:14 policy-db-migrator | 108 | 0210-sequence.sql | upgrade | 0800 | 0900 | 1606251455080900u | 1 | 2025-06-16 14:55:26.680873
15:01:14 policy-db-migrator | 109 | 0220-sequence.sql | upgrade | 0800 | 0900 | 1606251455080900u | 1 | 2025-06-16 14:55:26.735174
15:01:14 policy-db-migrator | 110 | 0100-jpatoscapolicy_targets.sql | upgrade | 0900 | 1000 | 1606251455081000u | 1 | 2025-06-16 14:55:26.788247
15:01:14 policy-db-migrator | 111 | 0110-jpatoscapolicytype_targets.sql | upgrade | 0900 | 1000 | 1606251455081000u | 1 | 2025-06-16 14:55:26.84911
15:01:14 policy-db-migrator | 112 | 0120-toscatrigger.sql | upgrade | 0900 | 1000 | 1606251455081000u | 1 | 2025-06-16 14:55:26.9053
15:01:14 policy-db-migrator | 113 | 0130-jpatoscapolicytype_triggers.sql | upgrade | 0900 | 1000 | 1606251455081000u | 1 | 2025-06-16 14:55:26.965517
15:01:14 policy-db-migrator | 114 | 0140-toscaparameter.sql | upgrade | 0900 | 1000 | 1606251455081000u | 1 | 2025-06-16 14:55:27.019813
15:01:14 policy-db-migrator | 115 | 0150-toscaproperty.sql | upgrade | 0900 | 1000 | 1606251455081000u | 1 | 2025-06-16 14:55:27.078004
15:01:14 policy-db-migrator | 116 | 0160-jpapolicyaudit_pk.sql | upgrade | 0900 | 1000 | 1606251455081000u | 1 | 2025-06-16 14:55:27.129658
15:01:14 policy-db-migrator | 117 | 0170-pdpstatistics_pk.sql | upgrade | 0900 | 1000 | 1606251455081000u | 1 | 2025-06-16 14:55:27.189157
15:01:14 policy-db-migrator | 118 | 0180-jpatoscanodetemplate_metadata.sql | upgrade | 0900 | 1000 | 1606251455081000u | 1 | 2025-06-16 14:55:27.238713
15:01:14 policy-db-migrator | 119 | 0100-upgrade.sql | upgrade | 1000 | 1100 | 1606251455081100u | 1 | 2025-06-16 14:55:27.290814
15:01:14 policy-db-migrator | 120 | 0100-jpapolicyaudit_renameuser.sql | upgrade | 1100 | 1200 | 1606251455081200u | 1 | 2025-06-16 14:55:27.343589
15:01:14 policy-db-migrator | 121 | 0110-idx_tsidx1.sql | upgrade | 1100 | 1200 | 1606251455081200u | 1 | 2025-06-16 14:55:27.401837
15:01:14 policy-db-migrator | 122 | 0120-audit_sequence.sql | upgrade | 1100 | 1200 | 1606251455081200u | 1 | 2025-06-16 14:55:27.459938
15:01:14 policy-db-migrator | 123 | 0130-statistics_sequence.sql | upgrade | 1100 | 1200 | 1606251455081200u | 1 | 2025-06-16 14:55:27.529232
15:01:14 policy-db-migrator | 124 | 0100-pdpstatistics.sql | upgrade | 1200 | 1300 | 1606251455081300u | 1 | 2025-06-16 14:55:27.585367
15:01:14 policy-db-migrator | 125 | 0110-jpapdpstatistics_enginestats.sql | upgrade | 1200 | 1300 | 1606251455081300u | 1 | 2025-06-16 14:55:27.639336
15:01:14 policy-db-migrator | 126 | 0120-statistics_sequence.sql | upgrade | 1200 | 1300 | 1606251455081300u | 1 | 2025-06-16 14:55:27.695608
15:01:14 policy-db-migrator | (126 rows)
15:01:14 policy-db-migrator |
15:01:14 policy-db-migrator | policyadmin: OK @ 1300
15:01:14 policy-db-migrator | Initializing clampacm...
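The policyadmin changelog printed above can also be queried directly for debugging. The following is a minimal sketch only, assuming the compose network's "postgres" container and that the schema_versions and changelog tables are reachable in the policyadmin database under policy_user (the container name, database, and credentials here are assumptions, not taken from this log):

    # Hypothetical spot-check of the migration state reported above.
    docker exec -it postgres psql -U policy_user -d policyadmin \
      -c "SELECT name, version FROM schema_versions;" \
      -c "SELECT id, script, from_version, to_version, success, attime FROM policyadmin_schema_changelog ORDER BY id;"
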
15:01:14 policy-db-migrator | 97 blocks
15:01:14 policy-db-migrator | Preparing upgrade release version: 1400
15:01:14 policy-db-migrator | Preparing upgrade release version: 1500
15:01:14 policy-db-migrator | Preparing upgrade release version: 1600
15:01:14 policy-db-migrator | Preparing upgrade release version: 1601
15:01:14 policy-db-migrator | Preparing upgrade release version: 1700
15:01:14 policy-db-migrator | Preparing upgrade release version: 1701
15:01:14 policy-db-migrator | Done
15:01:14 policy-db-migrator | List of databases
15:01:14 policy-db-migrator | (9 rows)
15:01:14 policy-db-migrator |
15:01:14 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping
15:01:14 policy-db-migrator | CREATE TABLE
15:01:14 policy-db-migrator | CREATE TABLE
15:01:14 policy-db-migrator | INSERT 0 1
15:01:14 policy-db-migrator | name | version
15:01:14 policy-db-migrator | ----------+---------
15:01:14 policy-db-migrator | clampacm | 0
15:01:14 policy-db-migrator | (1 row)
15:01:14 policy-db-migrator |
15:01:14 policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime
15:01:14 policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+--------
15:01:14 policy-db-migrator | (0 rows)
15:01:14 policy-db-migrator |
15:01:14 policy-db-migrator | clampacm: upgrade available: 0 -> 1701
15:01:14 policy-db-migrator | List of databases
15:01:14 policy-db-migrator | (9 rows)
15:01:14 policy-db-migrator |
15:01:14 policy-db-migrator | CREATE TABLE
15:01:14 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping
15:01:14 policy-db-migrator | NOTICE: relation "clampacm_schema_changelog" already exists, skipping
15:01:14 policy-db-migrator | CREATE TABLE
15:01:14 policy-db-migrator | upgrade: 0 -> 1701
15:01:14 policy-db-migrator | rc=0
15:01:14 policy-db-migrator |
15:01:14 policy-db-migrator | > upgrade 0100-automationcomposition.sql
15:01:14 policy-db-migrator | CREATE TABLE
15:01:14 policy-db-migrator | INSERT 0 1
15:01:14 policy-db-migrator | rc=0
15:01:14 policy-db-migrator |
15:01:14 policy-db-migrator | > upgrade 0200-automationcompositiondefinition.sql
15:01:14 policy-db-migrator | CREATE TABLE
15:01:14 policy-db-migrator | INSERT 0 1
15:01:14 policy-db-migrator | rc=0
15:01:14 policy-db-migrator |
15:01:14 policy-db-migrator | > upgrade 0300-automationcompositionelement.sql
15:01:14 policy-db-migrator | CREATE TABLE
15:01:14 policy-db-migrator | INSERT 0 1
15:01:14 policy-db-migrator | rc=0
15:01:14 policy-db-migrator |
15:01:14 policy-db-migrator | > upgrade 0400-nodetemplatestate.sql
15:01:14 policy-db-migrator | CREATE TABLE
15:01:14 policy-db-migrator | INSERT 0 1
15:01:14 policy-db-migrator | rc=0
15:01:14 policy-db-migrator |
15:01:14 policy-db-migrator | > upgrade 0500-participant.sql
15:01:14 policy-db-migrator | CREATE TABLE
15:01:14 policy-db-migrator | INSERT 0 1
15:01:14 policy-db-migrator | rc=0
15:01:14 policy-db-migrator |
15:01:14 policy-db-migrator | > upgrade 0600-participantsupportedelements.sql
15:01:14 policy-db-migrator | CREATE TABLE
15:01:14 policy-db-migrator | INSERT 0 1
15:01:14 policy-db-migrator | rc=0
15:01:14 policy-db-migrator |
15:01:14 policy-db-migrator | > upgrade 0700-ac_compositionId_index.sql
15:01:14 policy-db-migrator | CREATE INDEX
15:01:14 policy-db-migrator | INSERT 0 1
15:01:14 policy-db-migrator | rc=0
15:01:14 policy-db-migrator |
15:01:14 policy-db-migrator | > upgrade 0800-ac_element_fk_index.sql
15:01:14 policy-db-migrator | CREATE INDEX
15:01:14 policy-db-migrator | INSERT 0 1
15:01:14 policy-db-migrator | rc=0
15:01:14 policy-db-migrator |
15:01:14 policy-db-migrator | > upgrade 0900-dt_element_fk_index.sql
15:01:14 policy-db-migrator | CREATE INDEX
15:01:14 policy-db-migrator | INSERT 0 1
15:01:14 policy-db-migrator | rc=0
15:01:14 policy-db-migrator |
15:01:14 policy-db-migrator | > upgrade 1000-supported_element_fk_index.sql
15:01:14 policy-db-migrator | CREATE INDEX
15:01:14 policy-db-migrator | INSERT 0 1
15:01:14 policy-db-migrator | rc=0
15:01:14 policy-db-migrator |
15:01:14 policy-db-migrator | > upgrade 1100-automationcompositionelement_fk.sql
15:01:14 policy-db-migrator | ALTER TABLE
15:01:14 policy-db-migrator | INSERT 0 1
15:01:14 policy-db-migrator | rc=0
15:01:14 policy-db-migrator |
15:01:14 policy-db-migrator | > upgrade 1200-nodetemplate_fk.sql
15:01:14 policy-db-migrator | ALTER TABLE
15:01:14 policy-db-migrator | INSERT 0 1
15:01:14 policy-db-migrator | rc=0
15:01:14 policy-db-migrator |
15:01:14 policy-db-migrator | > upgrade 1300-participantsupportedelements_fk.sql
15:01:14 policy-db-migrator | ALTER TABLE
15:01:14 policy-db-migrator | INSERT 0 1
15:01:14 policy-db-migrator | rc=0
15:01:14 policy-db-migrator |
15:01:14 policy-db-migrator | > upgrade 0100-automationcomposition.sql
15:01:14 policy-db-migrator | ALTER TABLE
15:01:14 policy-db-migrator | UPDATE 0
15:01:14 policy-db-migrator | INSERT 0 1
15:01:14 policy-db-migrator | rc=0
15:01:14 policy-db-migrator |
15:01:14 policy-db-migrator | > upgrade 0200-automationcompositiondefinition.sql
15:01:14 policy-db-migrator | ALTER TABLE
15:01:14 policy-db-migrator | UPDATE 0
15:01:14 policy-db-migrator | INSERT 0 1
15:01:14 policy-db-migrator | rc=0
15:01:14 policy-db-migrator |
15:01:14 policy-db-migrator | > upgrade 0300-participantreplica.sql
15:01:14 policy-db-migrator | CREATE TABLE
15:01:14 policy-db-migrator | INSERT 0 1
15:01:14 policy-db-migrator | rc=0
15:01:14 policy-db-migrator |
15:01:14 policy-db-migrator | > upgrade 0400-participant.sql
15:01:14 policy-db-migrator | ALTER TABLE
15:01:14 policy-db-migrator | INSERT 0 1
15:01:14 policy-db-migrator | rc=0
15:01:14 policy-db-migrator |
15:01:14 policy-db-migrator | > upgrade 0500-participant_replica_fk_index.sql
15:01:14 policy-db-migrator | CREATE INDEX
15:01:14 policy-db-migrator | INSERT 0 1
15:01:14 policy-db-migrator | rc=0
15:01:14 policy-db-migrator |
15:01:14 policy-db-migrator | > upgrade 0600-participant_replica_fk.sql
15:01:14 policy-db-migrator | ALTER TABLE
15:01:14 policy-db-migrator | INSERT 0 1
15:01:14 policy-db-migrator | rc=0
15:01:14 policy-db-migrator |
15:01:14 policy-db-migrator | > upgrade 0700-automationcompositionelement.sql
15:01:14 policy-db-migrator | UPDATE 0
15:01:14 policy-db-migrator | INSERT 0 1
15:01:14 policy-db-migrator | rc=0
15:01:14 policy-db-migrator |
15:01:14 policy-db-migrator | > upgrade 0800-nodetemplatestate.sql
15:01:14 policy-db-migrator | UPDATE 0
15:01:14 policy-db-migrator | INSERT 0 1
15:01:14 policy-db-migrator | rc=0
15:01:14 policy-db-migrator |
15:01:14 policy-db-migrator | > upgrade 0100-automationcomposition.sql
15:01:14 policy-db-migrator | ALTER TABLE
15:01:14 policy-db-migrator | INSERT 0 1
15:01:14 policy-db-migrator | rc=0
15:01:14 policy-db-migrator |
15:01:14 policy-db-migrator | > upgrade 0200-automationcompositionelement.sql
15:01:14 policy-db-migrator | ALTER TABLE
15:01:14 policy-db-migrator | INSERT 0 1
15:01:14 policy-db-migrator | rc=0
15:01:14 policy-db-migrator |
15:01:14 policy-db-migrator | > upgrade 0100-automationcomposition.sql
15:01:14 policy-db-migrator | UPDATE 0
15:01:14 policy-db-migrator | INSERT 0 1
15:01:14 policy-db-migrator | rc=0
15:01:14 policy-db-migrator |
15:01:14 policy-db-migrator | > upgrade 0200-automationcompositionelement.sql
15:01:14 policy-db-migrator | UPDATE 0
15:01:14 policy-db-migrator | UPDATE 0
15:01:14 policy-db-migrator | INSERT 0 1
15:01:14 policy-db-migrator | rc=0
15:01:14 policy-db-migrator |
15:01:14 policy-db-migrator | > upgrade 0100-message.sql
15:01:14 policy-db-migrator | CREATE TABLE
15:01:14 policy-db-migrator | INSERT 0 1
15:01:14 policy-db-migrator | rc=0
15:01:14 policy-db-migrator |
15:01:14 policy-db-migrator | > upgrade 0200-messagejob.sql
15:01:14 policy-db-migrator | CREATE TABLE
15:01:14 policy-db-migrator | INSERT 0 1
15:01:14 policy-db-migrator | rc=0
15:01:14 policy-db-migrator |
15:01:14 policy-db-migrator | > upgrade 0300-messagejob_identificationId_index.sql
15:01:14 policy-db-migrator | CREATE INDEX
15:01:14 policy-db-migrator | INSERT 0 1
15:01:14 policy-db-migrator | rc=0
15:01:14 policy-db-migrator |
15:01:14 policy-db-migrator | > upgrade 0100-automationcompositionrollback.sql
15:01:14 policy-db-migrator | CREATE TABLE
15:01:14 policy-db-migrator | INSERT 0 1
15:01:14 policy-db-migrator | rc=0
15:01:14 policy-db-migrator |
15:01:14 policy-db-migrator | > upgrade 0200-automationcomposition.sql
15:01:14 policy-db-migrator | UPDATE 0
15:01:14 policy-db-migrator | UPDATE 0
15:01:14 policy-db-migrator | UPDATE 0
15:01:14 policy-db-migrator | UPDATE 0
15:01:14 policy-db-migrator | UPDATE 0
15:01:14 policy-db-migrator | UPDATE 0
15:01:14 policy-db-migrator | ALTER TABLE
15:01:14 policy-db-migrator | INSERT 0 1
15:01:14 policy-db-migrator | rc=0
15:01:14 policy-db-migrator |
15:01:14 policy-db-migrator | > upgrade 0300-automationcompositionelement.sql
15:01:14 policy-db-migrator | UPDATE 0
15:01:14 policy-db-migrator | UPDATE 0
15:01:14 policy-db-migrator | UPDATE 0
15:01:14 policy-db-migrator | UPDATE 0
15:01:14 policy-db-migrator | UPDATE 0
15:01:14 policy-db-migrator | ALTER TABLE
15:01:14 policy-db-migrator | INSERT 0 1
15:01:14 policy-db-migrator | rc=0
15:01:14 policy-db-migrator |
15:01:14 policy-db-migrator | > upgrade 0400-automationcomposition_fk.sql
15:01:14 policy-db-migrator | ALTER TABLE
15:01:14 policy-db-migrator | INSERT 0 1
15:01:14 policy-db-migrator | rc=0
15:01:14 policy-db-migrator |
15:01:14 policy-db-migrator | > upgrade 0500-automationcompositiondefinition.sql
15:01:14 policy-db-migrator | UPDATE 0
15:01:14 policy-db-migrator | UPDATE 0
15:01:14 policy-db-migrator | UPDATE 0
15:01:14 policy-db-migrator | UPDATE 0
15:01:14 policy-db-migrator | UPDATE 0
15:01:14 policy-db-migrator | ALTER TABLE
15:01:14 policy-db-migrator | INSERT 0 1
15:01:14 policy-db-migrator | rc=0
15:01:14 policy-db-migrator |
15:01:14 policy-db-migrator | > upgrade 0600-nodetemplatestate.sql
15:01:14 policy-db-migrator | UPDATE 0
15:01:14 policy-db-migrator | UPDATE 0
15:01:14 policy-db-migrator | UPDATE 0
15:01:14 policy-db-migrator | UPDATE 0
15:01:14 policy-db-migrator | ALTER TABLE
15:01:14 policy-db-migrator | INSERT 0 1
15:01:14 policy-db-migrator | rc=0
15:01:14 policy-db-migrator |
15:01:14 policy-db-migrator | > upgrade 0700-mb_identificationId_index.sql
15:01:14 policy-db-migrator | CREATE INDEX
15:01:14 policy-db-migrator | INSERT 0 1
15:01:14 policy-db-migrator | rc=0
15:01:14 policy-db-migrator |
15:01:14 policy-db-migrator | > upgrade 0800-participantreplica.sql
15:01:14 policy-db-migrator | UPDATE 0
15:01:14 policy-db-migrator | UPDATE 0
15:01:14 policy-db-migrator | ALTER TABLE
15:01:14 policy-db-migrator | INSERT 0 1
15:01:14 policy-db-migrator | rc=0
15:01:14 policy-db-migrator |
15:01:14 policy-db-migrator | > upgrade 0900-participantsupportedacelements.sql
15:01:14 policy-db-migrator | UPDATE 0
15:01:14 policy-db-migrator | UPDATE 0
15:01:14 policy-db-migrator | ALTER TABLE
15:01:14 policy-db-migrator | INSERT 0 1
15:01:14 policy-db-migrator | INSERT 0 1
15:01:14 policy-db-migrator | clampacm: OK: upgrade (1701)
15:01:14 policy-db-migrator | List of databases
15:01:14 policy-db-migrator | (9 rows)
15:01:14 policy-db-migrator |
15:01:14 policy-db-migrator | CREATE TABLE
15:01:14 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping
15:01:14 policy-db-migrator | CREATE TABLE
15:01:14 policy-db-migrator | NOTICE: relation "clampacm_schema_changelog" already exists, skipping
15:01:14 policy-db-migrator | name | version
15:01:14 policy-db-migrator | ----------+---------
15:01:14 policy-db-migrator | clampacm | 1701
15:01:14 policy-db-migrator | (1 row)
15:01:14 policy-db-migrator |
15:01:14 policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime
15:01:14 policy-db-migrator | ----+--------------------------------------------+-----------+--------------+------------+-------------------+---------+----------------------------
15:01:14 policy-db-migrator | 1 | 0100-automationcomposition.sql | upgrade | 1300 | 1400 | 1606251455281400u | 1 | 2025-06-16 14:55:28.381578
15:01:14 policy-db-migrator | 2 | 0200-automationcompositiondefinition.sql | upgrade | 1300 | 1400 | 1606251455281400u | 1 | 2025-06-16 14:55:28.440118
15:01:14 policy-db-migrator | 3 | 0300-automationcompositionelement.sql | upgrade | 1300 | 1400 | 1606251455281400u | 1 | 2025-06-16 14:55:28.500013
15:01:14 policy-db-migrator | 4 | 0400-nodetemplatestate.sql | upgrade | 1300 | 1400 | 1606251455281400u | 1 | 2025-06-16 14:55:28.560018
15:01:14 policy-db-migrator | 5 | 0500-participant.sql | upgrade | 1300 | 1400 | 1606251455281400u | 1 | 2025-06-16 14:55:28.617522
15:01:14 policy-db-migrator | 6 | 0600-participantsupportedelements.sql | upgrade | 1300 | 1400 | 1606251455281400u | 1 | 2025-06-16 14:55:28.673079
15:01:14 policy-db-migrator | 7 | 0700-ac_compositionId_index.sql | upgrade | 1300 | 1400 | 1606251455281400u | 1 | 2025-06-16 14:55:28.730318
15:01:14 policy-db-migrator | 8 | 0800-ac_element_fk_index.sql | upgrade | 1300 | 1400 | 1606251455281400u | 1 | 2025-06-16 14:55:28.785912
15:01:14 policy-db-migrator | 9 | 0900-dt_element_fk_index.sql | upgrade | 1300 | 1400 | 1606251455281400u | 1 | 2025-06-16 14:55:28.840773
15:01:14 policy-db-migrator | 10 | 1000-supported_element_fk_index.sql | upgrade | 1300 | 1400 | 1606251455281400u | 1 | 2025-06-16 14:55:28.897226
15:01:14 policy-db-migrator | 11 | 1100-automationcompositionelement_fk.sql | upgrade | 1300 | 1400 | 1606251455281400u | 1 | 2025-06-16 14:55:28.95188
15:01:14 policy-db-migrator | 12 | 1200-nodetemplate_fk.sql | upgrade | 1300 | 1400 | 1606251455281400u | 1 | 2025-06-16 14:55:29.008678
15:01:14 policy-db-migrator | 13 | 1300-participantsupportedelements_fk.sql | upgrade | 1300 | 1400 | 1606251455281400u | 1 | 2025-06-16 14:55:29.063974
15:01:14 policy-db-migrator | 14 | 0100-automationcomposition.sql | upgrade | 1400 | 1500 | 1606251455281500u | 1 | 2025-06-16 14:55:29.117515
15:01:14 policy-db-migrator | 15 | 0200-automationcompositiondefinition.sql | upgrade | 1400 | 1500 | 1606251455281500u | 1 | 2025-06-16 14:55:29.17235
15:01:14 policy-db-migrator | 16 | 0300-participantreplica.sql | upgrade | 1400 | 1500 | 1606251455281500u | 1 | 2025-06-16 14:55:29.236138
15:01:14 policy-db-migrator | 17 | 0400-participant.sql | upgrade | 1400 | 1500 | 1606251455281500u | 1 | 2025-06-16 14:55:29.291156
15:01:14 policy-db-migrator | 18 | 0500-participant_replica_fk_index.sql | upgrade | 1400 | 1500 | 1606251455281500u | 1 | 2025-06-16 14:55:29.347078
15:01:14 policy-db-migrator | 19 | 0600-participant_replica_fk.sql | upgrade | 1400 | 1500 | 1606251455281500u | 1 | 2025-06-16 14:55:29.404723
15:01:14 policy-db-migrator | 20 | 0700-automationcompositionelement.sql | upgrade | 1400 | 1500 | 1606251455281500u | 1 | 2025-06-16 14:55:29.454016
15:01:14 policy-db-migrator | 21 | 0800-nodetemplatestate.sql | upgrade | 1400 | 1500 | 1606251455281500u | 1 | 2025-06-16 14:55:29.506339
15:01:14 policy-db-migrator | 22 | 0100-automationcomposition.sql | upgrade | 1500 | 1600 | 1606251455281600u | 1 | 2025-06-16 14:55:29.558854
15:01:14 policy-db-migrator | 23 | 0200-automationcompositionelement.sql | upgrade | 1500 | 1600 | 1606251455281600u | 1 | 2025-06-16 14:55:29.611724
15:01:14 policy-db-migrator | 24 | 0100-automationcomposition.sql | upgrade | 1501 | 1601 | 1606251455281601u | 1 | 2025-06-16 14:55:29.66421
15:01:14 policy-db-migrator | 25 | 0200-automationcompositionelement.sql | upgrade | 1501 | 1601 | 1606251455281601u | 1 | 2025-06-16 14:55:29.718473
15:01:14 policy-db-migrator | 26 | 0100-message.sql | upgrade | 1600 | 1700 | 1606251455281700u | 1 | 2025-06-16 14:55:29.779955
15:01:14 policy-db-migrator | 27 | 0200-messagejob.sql | upgrade | 1600 | 1700 | 1606251455281700u | 1 | 2025-06-16 14:55:29.834601
15:01:14 policy-db-migrator | 28 | 0300-messagejob_identificationId_index.sql | upgrade | 1600 | 1700 | 1606251455281700u | 1 | 2025-06-16 14:55:29.889508
15:01:14 policy-db-migrator | 29 | 0100-automationcompositionrollback.sql | upgrade | 1601 | 1701 | 1606251455281701u | 1 | 2025-06-16 14:55:29.947396
15:01:14 policy-db-migrator | 30 | 0200-automationcomposition.sql | upgrade | 1601 | 1701 | 1606251455281701u | 1 | 2025-06-16 14:55:30.008077
15:01:14 policy-db-migrator | 31 | 0300-automationcompositionelement.sql | upgrade | 1601 | 1701 | 1606251455281701u | 1 | 2025-06-16 14:55:30.059012
15:01:14 policy-db-migrator | 32 | 0400-automationcomposition_fk.sql | upgrade | 1601 | 1701 | 1606251455281701u | 1 | 2025-06-16 14:55:30.117136
15:01:14 policy-db-migrator | 33 | 0500-automationcompositiondefinition.sql | upgrade | 1601 | 1701 | 1606251455281701u | 1 | 2025-06-16 14:55:30.173392
15:01:14 policy-db-migrator | 34 | 0600-nodetemplatestate.sql | upgrade | 1601 | 1701 | 1606251455281701u | 1 | 2025-06-16 14:55:30.230309
15:01:14 policy-db-migrator | 35 | 0700-mb_identificationId_index.sql | upgrade | 1601 | 1701 | 1606251455281701u | 1 | 2025-06-16 14:55:30.282088
15:01:14 policy-db-migrator | 36 | 0800-participantreplica.sql | upgrade | 1601 | 1701 | 1606251455281701u | 1 | 2025-06-16 14:55:30.341838
15:01:14 policy-db-migrator | 37 | 0900-participantsupportedacelements.sql | upgrade | 1601 | 1701 | 1606251455281701u | 1 | 2025-06-16 14:55:30.398099
15:01:14 policy-db-migrator | (37 rows)
15:01:14 policy-db-migrator |
15:01:14 policy-db-migrator | clampacm: OK @ 1701
15:01:14 policy-db-migrator | Initializing pooling...
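Each "> upgrade NNNN-*.sql" block above follows the same pattern: the migrator announces the script, psql echoes the statement tags (CREATE TABLE, ALTER TABLE, UPDATE 0, INSERT 0 1), a changelog row is recorded, and the script's exit status is printed as rc=0. A minimal, purely illustrative sketch of that loop (the real logic lives in the db-migrator image and is not reproduced in this log; script path and psql flags are assumptions):

    # Illustrative upgrade loop matching the "> upgrade ... / rc=0" blocks above.
    for script in $(ls upgrade/*.sql | sort); do
        echo "> upgrade $(basename "$script")"
        psql -U policy_user -d clampacm -f "$script"
        echo "rc=$?"        # the migrator records this per-script status
    done
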
15:01:14 policy-db-migrator | 4 blocks
15:01:14 policy-db-migrator | Preparing upgrade release version: 1600
15:01:14 policy-db-migrator | Done
15:01:14 policy-db-migrator | List of databases
15:01:14 policy-db-migrator | (9 rows)
15:01:14 policy-db-migrator |
15:01:14 policy-db-migrator | CREATE TABLE
15:01:14 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping
15:01:14 policy-db-migrator | CREATE TABLE
15:01:14 policy-db-migrator | INSERT 0 1
15:01:14 policy-db-migrator | name | version
15:01:14 policy-db-migrator | ---------+---------
15:01:14 policy-db-migrator | pooling | 0
15:01:14 policy-db-migrator | (1 row)
15:01:14 policy-db-migrator |
15:01:14 policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime
15:01:14 policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+--------
15:01:14 policy-db-migrator | (0 rows)
15:01:14 policy-db-migrator |
15:01:14 policy-db-migrator | pooling: upgrade available: 0 -> 1600
15:01:14 policy-db-migrator | List of databases
15:01:14 policy-db-migrator | (9 rows)
15:01:14 policy-db-migrator |
15:01:14 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping
15:01:14 policy-db-migrator | CREATE TABLE
15:01:14 policy-db-migrator | NOTICE: relation "pooling_schema_changelog" already exists, skipping
15:01:14 policy-db-migrator | CREATE TABLE
15:01:14 policy-db-migrator | upgrade: 0 -> 1600
15:01:14 policy-db-migrator | rc=0
15:01:14 policy-db-migrator |
15:01:14 policy-db-migrator | > upgrade 0100-distributed.locking.sql
15:01:14 policy-db-migrator | CREATE TABLE
15:01:14 policy-db-migrator | CREATE INDEX
15:01:14 policy-db-migrator | CREATE INDEX
15:01:14 policy-db-migrator | INSERT 0 1
15:01:14 policy-db-migrator | INSERT 0 1
15:01:14 policy-db-migrator | pooling: OK: upgrade (1600)
15:01:14 policy-db-migrator | List of databases
15:01:14 policy-db-migrator | (9 rows)
15:01:14 policy-db-migrator |
15:01:14 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping
15:01:14 policy-db-migrator | CREATE TABLE
15:01:14 policy-db-migrator | CREATE TABLE
15:01:14 policy-db-migrator | NOTICE: relation "pooling_schema_changelog" already exists, skipping
15:01:14 policy-db-migrator | name | version
15:01:14 policy-db-migrator | ---------+---------
15:01:14 policy-db-migrator | pooling | 1600
15:01:14 policy-db-migrator | (1 row)
15:01:14 policy-db-migrator |
15:01:14 policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime
15:01:14 policy-db-migrator | ----+------------------------------+-----------+--------------+------------+-------------------+---------+----------------------------
15:01:14 policy-db-migrator | 1 | 0100-distributed.locking.sql | upgrade | 1500 | 1600 | 1606251455311600u | 1 | 2025-06-16 14:55:31.078825
15:01:14 policy-db-migrator | (1 row)
15:01:14 policy-db-migrator |
15:01:14 policy-db-migrator | pooling: OK @ 1600
15:01:14 policy-db-migrator | Initializing operationshistory...
15:01:14 policy-db-migrator | 6 blocks
15:01:14 policy-db-migrator | Preparing upgrade release version: 1600
15:01:14 policy-db-migrator | Done
15:01:14 policy-db-migrator | List of databases
15:01:14 policy-db-migrator | (9 rows)
15:01:14 policy-db-migrator |
15:01:14 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping
15:01:14 policy-db-migrator | CREATE TABLE
15:01:14 policy-db-migrator | CREATE TABLE
15:01:14 policy-db-migrator | INSERT 0 1
15:01:14 policy-db-migrator | name | version
15:01:14 policy-db-migrator | -------------------+---------
15:01:14 policy-db-migrator | operationshistory | 0
15:01:14 policy-db-migrator | (1 row)
15:01:14 policy-db-migrator |
15:01:14 policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime
15:01:14 policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+--------
15:01:14 policy-db-migrator | (0 rows)
15:01:14 policy-db-migrator |
15:01:14 policy-db-migrator | operationshistory: upgrade available: 0 -> 1600
15:01:14 policy-db-migrator | List of databases
15:01:14 policy-db-migrator | (9 rows)
15:01:14 policy-db-migrator |
15:01:14 policy-db-migrator | CREATE TABLE
15:01:14 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping
15:01:14 policy-db-migrator | NOTICE: relation "operationshistory_schema_changelog" already exists, skipping
15:01:14 policy-db-migrator | CREATE TABLE
15:01:14 policy-db-migrator | upgrade: 0 -> 1600
15:01:14 policy-db-migrator | rc=0
15:01:14 policy-db-migrator |
15:01:14 policy-db-migrator | > upgrade 0100-ophistory_id_sequence.sql
15:01:14 policy-db-migrator | CREATE TABLE
15:01:14 policy-db-migrator | INSERT 0 1
15:01:14 policy-db-migrator | INSERT 0 1
15:01:14 policy-db-migrator | rc=0
15:01:14 policy-db-migrator |
15:01:14 policy-db-migrator | > upgrade 0110-operationshistory.sql
15:01:14 policy-db-migrator | CREATE TABLE
15:01:14 policy-db-migrator | CREATE INDEX
15:01:14 policy-db-migrator | CREATE INDEX
15:01:14 policy-db-migrator | INSERT 0 1
15:01:14 policy-db-migrator | INSERT 0 1
15:01:14 policy-db-migrator | operationshistory: OK: upgrade (1600)
15:01:14 policy-db-migrator | List of databases
15:01:14 policy-db-migrator | (9 rows)
15:01:14 policy-db-migrator |
15:01:14 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping
15:01:14 policy-db-migrator | CREATE TABLE
15:01:14 policy-db-migrator | CREATE TABLE
15:01:14 policy-db-migrator | NOTICE: relation "operationshistory_schema_changelog" already exists, skipping
15:01:14 policy-db-migrator | name | version
15:01:14 policy-db-migrator | -------------------+---------
15:01:14 policy-db-migrator | operationshistory | 1600
15:01:14 policy-db-migrator | (1 row)
15:01:14 policy-db-migrator |
15:01:14 policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime
15:01:14 policy-db-migrator | ----+--------------------------------+-----------+--------------+------------+-------------------+---------+----------------------------
0100-ophistory_id_sequence.sql | upgrade | 1500 | 1600 | 1606251455311600u | 1 | 2025-06-16 14:55:31.752782 15:01:14 policy-db-migrator | 2 | 0110-operationshistory.sql | upgrade | 1500 | 1600 | 1606251455311600u | 1 | 2025-06-16 14:55:31.830844 15:01:14 policy-db-migrator | (2 rows) 15:01:14 policy-db-migrator | 15:01:14 policy-db-migrator | operationshistory: OK @ 1600 15:01:14 policy-opa-pdp | Waiting for kafka port 9092... 15:01:14 policy-opa-pdp | nc: connect to kafka (172.17.0.6) port 9092 (tcp) failed: Connection refused 15:01:14 policy-opa-pdp | Connection to kafka (172.17.0.6) 9092 port [tcp/*] succeeded! 15:01:14 policy-opa-pdp | Waiting for pap port 6969... 15:01:14 policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused [previous message repeated dozens of times while pap was starting] 15:01:14 policy-opa-pdp | Connection to pap (172.17.0.9) 6969 port [tcp/*] succeeded!
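The entrypoint gates startup on its dependencies: it polls kafka:9092 and then pap:6969 until each port accepts a TCP connection. A minimal Python sketch of the same wait-for-port loop (the real entrypoint is a shell script using nc; the hosts and ports are the ones logged above):

    import socket
    import time

    def wait_for_port(host: str, port: int, interval: float = 2.0) -> None:
        # Block until host:port accepts TCP connections, like the nc loop above.
        while True:
            try:
                with socket.create_connection((host, port), timeout=2.0):
                    print(f"Connection to {host} {port} port [tcp/*] succeeded!")
                    return
            except OSError:
                print(f"nc: connect to {host} port {port} (tcp) failed: Connection refused")
                time.sleep(interval)

    wait_for_port("kafka", 9092)
    wait_for_port("pap", 6969)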
15:01:14 policy-opa-pdp | time="2025-06-16T14:56:36Z" level=debug msg="###################################### " 15:01:14 policy-opa-pdp | time="2025-06-16T14:56:36Z" level=debug msg="OPA-PDP: Starting initialisation " 15:01:14 policy-opa-pdp | time="2025-06-16T14:56:36Z" level=debug msg="###################################### " 15:01:14 policy-opa-pdp | time="2025-06-16T14:56:36Z" level=warning msg="KAFKA_URL not defined, using default value" 15:01:14 policy-opa-pdp | time="2025-06-16T14:56:36Z" level=warning msg="PAP_TOPIC not defined, using default value" 15:01:14 policy-opa-pdp | time="2025-06-16T14:56:36Z" level=warning msg="PATCH_TOPIC not defined, using default value" 15:01:14 policy-opa-pdp | time="2025-06-16T14:56:36Z" level=warning msg="PATCH_GROUPID not defined, using default value" 15:01:14 policy-opa-pdp | time="2025-06-16T14:56:36Z" level=warning msg="API_USER not defined, using default value" 15:01:14 policy-opa-pdp | time="2025-06-16T14:56:36Z" level=warning msg="API_PASSWORD not defined, using default value" 15:01:14 policy-opa-pdp | time="2025-06-16T14:56:36Z" level=warning msg="UseSASLForKAFKA not defined, using default value" 15:01:14 policy-opa-pdp | decodedConfig org.apache.kafka.common.security.scram.ScramLoginModule required username="policy-opa-pdp-ku" password="" 15:01:14 policy-opa-pdp | time="2025-06-16T14:56:36Z" level=debug msg="Username: " 15:01:14 policy-opa-pdp | time="2025-06-16T14:56:36Z" level=debug msg="Password: " 15:01:14 policy-opa-pdp | time="2025-06-16T14:56:36Z" level=warning msg="USE_KAFKA_FOR_PATCH not defined, using default value: false" 15:01:14 policy-opa-pdp | time="2025-06-16T14:56:36Z" level=debug msg="Configuration module: environment initialised" 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:56:36.7184+00:00] logger initialised Filepath = /var/logs/logs.log, Logsize(MB) = 10, Backups = 3, Loglevel = debug 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:56:36.7188+00:00] Name: opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:56:36.7225+00:00] Starting OPA PDP Service 15:01:14 policy-opa-pdp | INFO[2025-06-16T14:56:41.7268+00:00] HTTP server started 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:56:41.7282+00:00] Create an instance of OPA Object 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:56:41.7284+00:00] Configure an instance of OPA Object 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:56:41.7297+00:00] Topic start :::: policy-pdp-pap 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:56:41.7298+00:00] Creating Kafka Consumer singleton instance 15:01:14 policy-opa-pdp | &map[auto.offset.reset:latest bootstrap.servers:kafka:9092 group.id:opa-pdp]DEBU[2025-06-16T14:56:41.7328+00:00] Topic Subscribed: policy-pdp-pap 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:56:41.7329+00:00] Created Singleton consumer instance 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:56:41.7399+00:00] Starting PDP Message Listener.....
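The startup log shows the consumer configuration the PDP builds after falling back to defaults for the unset environment variables: bootstrap.servers kafka:9092, group.id opa-pdp, auto.offset.reset latest, subscribed to policy-pdp-pap. The same configuration expressed as a confluent-kafka consumer in Python (the service itself is written in Go; the values are taken verbatim from the log):

    import json
    from confluent_kafka import Consumer

    # Mirrors the printed config: &map[auto.offset.reset:latest bootstrap.servers:kafka:9092 group.id:opa-pdp]
    consumer = Consumer({
        "bootstrap.servers": "kafka:9092",
        "group.id": "opa-pdp",
        "auto.offset.reset": "latest",
    })
    consumer.subscribe(["policy-pdp-pap"])

    while True:
        msg = consumer.poll(timeout=1.0)
        if msg is None or msg.error():
            continue
        event = json.loads(msg.value())
        print(event["messageName"])  # PDP_STATUS / PDP_UPDATE / PDP_STATE_CHANGE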
15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:56:51.7493+00:00] New Ticker started with interval 60000 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:57:01.7593+00:00] After registration successful delay 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:57:51.7751+00:00] [OUT|KAFKA|policy-pdp-pap] 15:01:14 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Status Registration Message","response":null,"policies":[],"name":"opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c","requestId":"6460956e-4a01-4c4a-bb69-b73593813620","pdpGroup":"opaGroup","pdpSubgroup":null,"timestampMs":"1750085871774","deploymentInstanceInfo":""} 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:57:51.7751+00:00] Sending Heartbeat ... 15:01:14 policy-opa-pdp | 2025/06/16 14:57:51 KafkaProducer or producer produce message 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:57:51.8014+00:00] [IN|KAFKA|policy-pdp-pap] 15:01:14 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Status Registration Message","response":null,"policies":[],"name":"opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c","requestId":"6460956e-4a01-4c4a-bb69-b73593813620","pdpGroup":"opaGroup","pdpSubgroup":null,"timestampMs":"1750085871774","deploymentInstanceInfo":""} 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:57:51.8016+00:00] messageType: PDP_STATUS 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:57:51.8017+00:00] discarding event of type PDP_STATUS 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:57:52.4488+00:00] [IN|KAFKA|policy-pdp-pap] 15:01:14 policy-opa-pdp | {"source":"pap-ef1c2203-595c-4ac7-b11f-f04e7efc9986","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.slice.capacity.check":"ewogICAgInRocmVzaG9sZCI6IDcwCn0="},"policy":{"slice.capacity.check":"cGFja2FnZSBzbGljZS5jYXBhY2l0eS5jaGVjawoKIyBEZWZhdWx0IHJ1bGUgdG8gZGVueSBpZiBubyBwb2xpY3kgbWF0Y2hlcwpkZWZhdWx0IGRlY2lzaW9uIDo9IHsKCSJyZXN1bHQiOiAiUGVybWl0IiwKCSJyZWFzb24iOiAiTm8gbWF0Y2hpbmcgcnVsZXMgYXBwbGllZCIsCn0KCiMgRGVueSBydWxlIGZvciBgc3N0ID0gMWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAxCglpbnB1dC50b3RhbF9yZXNvdXJjZSA+IGRhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGQKfQoKIyBEZW55IHJ1bGUgZm9yIGBzc3QgPSAyOWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAyOQoJaW5wdXQudG90YWxfcmVzb3VyY2UgPiBkYXRhLm5vZGUuc2xpY2UuY2FwYWNpdHkuY2hlY2sudGhyZXNob2xkCn0K"}},"name":"slice.capacity.check","version":"1.0.0","metadata":{"policy-id":"slice.capacity.check","policy-version":"1.0.0"}}],"messageName":"PDP_UPDATE","requestId":"28474638-9707-4bbd-8aaa-789bc1608bfe","timestampMs":1750085872350,"name":"opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:57:52.4491+00:00] messageType: PDP_UPDATE 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:57:52.4494+00:00] PDP_UPDATE Message received: 
{"source":"pap-ef1c2203-595c-4ac7-b11f-f04e7efc9986","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.slice.capacity.check":"ewogICAgInRocmVzaG9sZCI6IDcwCn0="},"policy":{"slice.capacity.check":"cGFja2FnZSBzbGljZS5jYXBhY2l0eS5jaGVjawoKIyBEZWZhdWx0IHJ1bGUgdG8gZGVueSBpZiBubyBwb2xpY3kgbWF0Y2hlcwpkZWZhdWx0IGRlY2lzaW9uIDo9IHsKCSJyZXN1bHQiOiAiUGVybWl0IiwKCSJyZWFzb24iOiAiTm8gbWF0Y2hpbmcgcnVsZXMgYXBwbGllZCIsCn0KCiMgRGVueSBydWxlIGZvciBgc3N0ID0gMWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAxCglpbnB1dC50b3RhbF9yZXNvdXJjZSA+IGRhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGQKfQoKIyBEZW55IHJ1bGUgZm9yIGBzc3QgPSAyOWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAyOQoJaW5wdXQudG90YWxfcmVzb3VyY2UgPiBkYXRhLm5vZGUuc2xpY2UuY2FwYWNpdHkuY2hlY2sudGhyZXNob2xkCn0K"}},"name":"slice.capacity.check","version":"1.0.0","metadata":{"policy-id":"slice.capacity.check","policy-version":"1.0.0"}}],"messageName":"PDP_UPDATE","requestId":"28474638-9707-4bbd-8aaa-789bc1608bfe","timestampMs":1750085872350,"name":"opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:57:52.4495+00:00] Policy Is Allowed: slice.capacity.check 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:57:52.4495+00:00] Validating properties data for policy: slice.capacity.check 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:57:52.4496+00:00] Validating properties policy for policy: slice.capacity.check 15:01:14 policy-opa-pdp | INFO[2025-06-16T14:57:52.4496+00:00] Validation successful for policy: slice.capacity.check 15:01:14 policy-opa-pdp | INFO[2025-06-16T14:57:52.4499+00:00] Directory created: /opt/policies/slice/capacity/check 15:01:14 policy-opa-pdp | INFO[2025-06-16T14:57:52.4500+00:00] Policy file saved: /opt/policies/slice/capacity/check/policy.rego 15:01:14 policy-opa-pdp | INFO[2025-06-16T14:57:52.4502+00:00] Directory created: /opt/data/node/slice/capacity/check 15:01:14 policy-opa-pdp | INFO[2025-06-16T14:57:52.4503+00:00] Data file saved: /opt/data/node/slice/capacity/check/data.json 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:57:52.4503+00:00] Before calling combinedoutput 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:57:52.4723+00:00] Bundle Built Sucessfully.... 
15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:57:52.4751+00:00] storage not found creating : /node 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:57:52.4753+00:00] storage not found creating : /node/slice 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:57:52.4755+00:00] storage not found creating : /node/slice/capacity 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:57:52.4757+00:00] storage not found creating : /node/slice/capacity/check 15:01:14 policy-opa-pdp | INFO[2025-06-16T14:57:52.4759+00:00] PoliciesDeployed Map: { 15:01:14 policy-opa-pdp | "deployed_policies_dict": [ 15:01:14 policy-opa-pdp | { 15:01:14 policy-opa-pdp | "data": [ 15:01:14 policy-opa-pdp | "node.slice.capacity.check" 15:01:14 policy-opa-pdp | ], 15:01:14 policy-opa-pdp | "policy": [ 15:01:14 policy-opa-pdp | "slice.capacity.check" 15:01:14 policy-opa-pdp | ], 15:01:14 policy-opa-pdp | "policy-id": "slice.capacity.check", 15:01:14 policy-opa-pdp | "policy-version": "1.0.0" 15:01:14 policy-opa-pdp | } 15:01:14 policy-opa-pdp | ] 15:01:14 policy-opa-pdp | } 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:57:52.4761+00:00] Loaded Policy: slice.capacity.check 15:01:14 policy-opa-pdp | INFO[2025-06-16T14:57:52.4762+00:00] Processed policies_to_be_deployed successfully 15:01:14 policy-opa-pdp | INFO[2025-06-16T14:57:52.4764+00:00] Sending PDP Status With Update Response 15:01:14 policy-opa-pdp | 2025/06/16 14:57:52 KafkaProducer or producer produce message 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:57:52.4768+00:00] [OUT|KAFKA|policy-pdp-pap] 15:01:14 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"28474638-9707-4bbd-8aaa-789bc1608bfe","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"slice.capacity.check\": \"1.0.0\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c","requestId":"0b2d8de5-4382-46f5-be2d-0fc1a236eff5","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750085872476","deploymentInstanceInfo":""} 15:01:14 policy-opa-pdp | INFO[2025-06-16T14:57:52.4770+00:00] PDP_STATUS Message Sent Successfully 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:57:52.4771+00:00] 120000 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:57:52.4773+00:00] New Ticker started with interval 120000 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:57:52.4867+00:00] [IN|KAFKA|policy-pdp-pap] 15:01:14 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"28474638-9707-4bbd-8aaa-789bc1608bfe","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"slice.capacity.check\": \"1.0.0\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c","requestId":"0b2d8de5-4382-46f5-be2d-0fc1a236eff5","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750085872476","deploymentInstanceInfo":""} 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:57:52.4868+00:00] messageType: PDP_STATUS 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:57:52.4868+00:00] discarding event of type PDP_STATUS 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:57:52.5236+00:00] [IN|KAFKA|policy-pdp-pap] 15:01:14 policy-opa-pdp | 
{"source":"pap-ef1c2203-595c-4ac7-b11f-f04e7efc9986","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"a72fc544-387e-487c-a1ea-700fb123d178","timestampMs":1750085872350,"name":"opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:57:52.5236+00:00] messageType: PDP_STATE_CHANGE 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:57:52.5237+00:00] PDP STATE CHANGE message received: {"source":"pap-ef1c2203-595c-4ac7-b11f-f04e7efc9986","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"a72fc544-387e-487c-a1ea-700fb123d178","timestampMs":1750085872350,"name":"opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:57:52.5237+00:00] State change from PASSIVE To : ACTIVE 15:01:14 policy-opa-pdp | INFO[2025-06-16T14:57:52.5237+00:00] Sending PDP Status With State Change response 15:01:14 policy-opa-pdp | 2025/06/16 14:57:52 KafkaProducer or producer produce message 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:57:52.5238+00:00] [OUT|KAFKA|policy-pdp-pap] 15:01:14 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message to Pdp State Change","response":{"responseTo":"a72fc544-387e-487c-a1ea-700fb123d178","responseStatus":"SUCCESS","responseMessage":"PDP State Changed From PASSIVE TO Active"},"policies":[],"name":"opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c","requestId":"92b067b6-f5ea-4d41-a00e-ac6e59e5195a","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750085872523","deploymentInstanceInfo":""} 15:01:14 policy-opa-pdp | INFO[2025-06-16T14:57:52.5238+00:00] PDP_STATUS With State Change Message Sent Successfully 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:57:52.5322+00:00] [IN|KAFKA|policy-pdp-pap] 15:01:14 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message to Pdp State Change","response":{"responseTo":"a72fc544-387e-487c-a1ea-700fb123d178","responseStatus":"SUCCESS","responseMessage":"PDP State Changed From PASSIVE TO Active"},"policies":[],"name":"opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c","requestId":"92b067b6-f5ea-4d41-a00e-ac6e59e5195a","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750085872523","deploymentInstanceInfo":""} 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:57:52.5323+00:00] messageType: PDP_STATUS 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:57:52.5323+00:00] discarding event of type PDP_STATUS 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:57:52.9129+00:00] [IN|KAFKA|policy-pdp-pap] 15:01:14 policy-opa-pdp | {"source":"pap-ef1c2203-595c-4ac7-b11f-f04e7efc9986","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"029327b3-0b15-42c8-9721-7c452af01084","timestampMs":1750085872893,"name":"opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:57:52.9130+00:00] messageType: PDP_UPDATE 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:57:52.9132+00:00] PDP_UPDATE Message received: 
{"source":"pap-ef1c2203-595c-4ac7-b11f-f04e7efc9986","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"029327b3-0b15-42c8-9721-7c452af01084","timestampMs":1750085872893,"name":"opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 15:01:14 policy-opa-pdp | INFO[2025-06-16T14:57:52.9133+00:00] Sending PDP Status With Update Response 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:57:52.9134+00:00] [OUT|KAFKA|policy-pdp-pap] 15:01:14 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"029327b3-0b15-42c8-9721-7c452af01084","responseStatus":"SUCCESS","responseMessage":"PDP UPDATE is successfull"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c","requestId":"f9020cca-78a1-43b9-9aa6-fc6c4bc9e4fa","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750085872913","deploymentInstanceInfo":""} 15:01:14 policy-opa-pdp | INFO[2025-06-16T14:57:52.9134+00:00] PDP_STATUS Message Sent Successfully 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:57:52.9134+00:00] 120000 15:01:14 policy-opa-pdp | 2025/06/16 14:57:52 KafkaProducer or producer produce message 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:57:52.9218+00:00] [IN|KAFKA|policy-pdp-pap] 15:01:14 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"029327b3-0b15-42c8-9721-7c452af01084","responseStatus":"SUCCESS","responseMessage":"PDP UPDATE is successfull"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c","requestId":"f9020cca-78a1-43b9-9aa6-fc6c4bc9e4fa","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750085872913","deploymentInstanceInfo":""} 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:57:52.9218+00:00] messageType: PDP_STATUS 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:57:52.9218+00:00] discarding event of type PDP_STATUS 15:01:14 policy-opa-pdp | 2025/06/16 14:58:51 KafkaProducer or producer produce message 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:58:51.7789+00:00] [OUT|KAFKA|policy-pdp-pap] 15:01:14 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp heartbeat","response":null,"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c","requestId":"c423da31-fbdd-412b-a24a-3e2793263d69","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750085931778","deploymentInstanceInfo":""} 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:58:51.7790+00:00] Sending Heartbeat ... 
15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:58:51.7893+00:00] [IN|KAFKA|policy-pdp-pap] 15:01:14 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp heartbeat","response":null,"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c","requestId":"c423da31-fbdd-412b-a24a-3e2793263d69","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750085931778","deploymentInstanceInfo":""} 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:58:51.7895+00:00] messageType: PDP_STATUS 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:58:51.7895+00:00] discarding event of type PDP_STATUS 15:01:14 policy-opa-pdp | WARN[2025-06-16T14:58:53.1321+00:00] Invalid or Missing Request ID 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:58:53.1322+00:00] Received Health Check message 15:01:14 policy-opa-pdp | INFO[2025-06-16T14:58:53.1396+00:00] PDP received a request to get data through API 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:58:53.1397+00:00] datapath to get Data : / 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:58:53.1399+00:00] Json Data at /: {"node":{"slice":{"capacity":{"check":{"threshold":70}}}},"system":{"version":{"build_commit":"","build_hostname":"","build_timestamp":"","version":"1.1.0"}}} 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:58:54.4421+00:00] [IN|KAFKA|policy-pdp-pap] 15:01:14 policy-opa-pdp | {"source":"pap-ef1c2203-595c-4ac7-b11f-f04e7efc9986","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.zoneB":"ewogICJ6b25lIjogewogICAgInpvbmVfYWNjZXNzX2xvZ3MiOiBbCiAgICAgIHsgImxvZ19pZCI6ICJsb2cxIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDA5OjAwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJncmFudGVkIiwgInVzZXIiOiAidXNlcjEiIH0sCiAgICAgIHsgImxvZ19pZCI6ICJsb2cyIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDEwOjMwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJkZW5pZWQiLCAidXNlciI6ICJ1c2VyMiIgfSwKICAgICAgeyAibG9nX2lkIjogImxvZzMiLCAidGltZXN0YW1wIjogIjIwMjQtMTEtMDFUMTE6MDA6MDBaIiwgInpvbmVfaWQiOiAiem9uZUIiLCAiYWNjZXNzIjogImdyYW50ZWQiLCAidXNlciI6ICJ1c2VyMyIgfQogICAgXQogIH0KfQ=="},"policy":{"zoneB":"cGFja2FnZSB6b25lQgogCmltcG9ydCByZWdvLnYxCiAKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQogCmFsbG93IGlmIHsKICAgIGhhc196b25lX2FjY2VzcwogICAgYWN0aW9uX2lzX2xvZ192aWV3Cn0KIAphY3Rpb25faXNfbG9nX3ZpZXcgaWYgewogICAgInZpZXciIGluIGlucHV0LmFjdGlvbnMKfQogCmhhc196b25lX2FjY2VzcyBjb250YWlucyBhY2Nlc3NfZGF0YSBpZiB7CiAgICBzb21lIHpvbmVfZGF0YSBpbiBkYXRhLm5vZGUuem9uZUIuem9uZS56b25lX2FjY2Vzc19sb2dzCiAgICB6b25lX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KICAgIHpvbmVfZGF0YS50aW1lc3RhbXAgPCBpbnB1dC50aW1lX3BlcmlvZC50bwogICAgem9uZV9kYXRhLnpvbmVfaWQgPT0gaW5wdXQuem9uZV9pZAogICAgYWNjZXNzX2RhdGEgOj0ge2RhdGF0eXBlOiB6b25lX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"zoneB","version":"1.0.6","metadata":{"policy-id":"zoneB","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"39b5658c-5e3b-4103-b789-ca3be64640d6","timestampMs":1750085934380,"name":"opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:58:54.4423+00:00] messageType: PDP_UPDATE 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:58:54.4425+00:00] PDP_UPDATE Message received: 
{"source":"pap-ef1c2203-595c-4ac7-b11f-f04e7efc9986","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.zoneB":"ewogICJ6b25lIjogewogICAgInpvbmVfYWNjZXNzX2xvZ3MiOiBbCiAgICAgIHsgImxvZ19pZCI6ICJsb2cxIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDA5OjAwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJncmFudGVkIiwgInVzZXIiOiAidXNlcjEiIH0sCiAgICAgIHsgImxvZ19pZCI6ICJsb2cyIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDEwOjMwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJkZW5pZWQiLCAidXNlciI6ICJ1c2VyMiIgfSwKICAgICAgeyAibG9nX2lkIjogImxvZzMiLCAidGltZXN0YW1wIjogIjIwMjQtMTEtMDFUMTE6MDA6MDBaIiwgInpvbmVfaWQiOiAiem9uZUIiLCAiYWNjZXNzIjogImdyYW50ZWQiLCAidXNlciI6ICJ1c2VyMyIgfQogICAgXQogIH0KfQ=="},"policy":{"zoneB":"cGFja2FnZSB6b25lQgogCmltcG9ydCByZWdvLnYxCiAKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQogCmFsbG93IGlmIHsKICAgIGhhc196b25lX2FjY2VzcwogICAgYWN0aW9uX2lzX2xvZ192aWV3Cn0KIAphY3Rpb25faXNfbG9nX3ZpZXcgaWYgewogICAgInZpZXciIGluIGlucHV0LmFjdGlvbnMKfQogCmhhc196b25lX2FjY2VzcyBjb250YWlucyBhY2Nlc3NfZGF0YSBpZiB7CiAgICBzb21lIHpvbmVfZGF0YSBpbiBkYXRhLm5vZGUuem9uZUIuem9uZS56b25lX2FjY2Vzc19sb2dzCiAgICB6b25lX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KICAgIHpvbmVfZGF0YS50aW1lc3RhbXAgPCBpbnB1dC50aW1lX3BlcmlvZC50bwogICAgem9uZV9kYXRhLnpvbmVfaWQgPT0gaW5wdXQuem9uZV9pZAogICAgYWNjZXNzX2RhdGEgOj0ge2RhdGF0eXBlOiB6b25lX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"zoneB","version":"1.0.6","metadata":{"policy-id":"zoneB","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"39b5658c-5e3b-4103-b789-ca3be64640d6","timestampMs":1750085934380,"name":"opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:58:54.4425+00:00] Check if Policy is Already Deployed: { 15:01:14 policy-opa-pdp | "deployed_policies_dict": [ 15:01:14 policy-opa-pdp | { 15:01:14 policy-opa-pdp | "data": [ 15:01:14 policy-opa-pdp | "node.slice.capacity.check" 15:01:14 policy-opa-pdp | ], 15:01:14 policy-opa-pdp | "policy": [ 15:01:14 policy-opa-pdp | "slice.capacity.check" 15:01:14 policy-opa-pdp | ], 15:01:14 policy-opa-pdp | "policy-id": "slice.capacity.check", 15:01:14 policy-opa-pdp | "policy-version": "1.0.0" 15:01:14 policy-opa-pdp | } 15:01:14 policy-opa-pdp | ] 15:01:14 policy-opa-pdp | } 15:01:14 policy-opa-pdp | INFO[2025-06-16T14:58:54.4434+00:00] Policy is new and should be deployed: zoneB 1.0.6 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:58:54.4434+00:00] Policy Is Allowed: zoneB 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:58:54.4434+00:00] Validating properties data for policy: zoneB 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:58:54.4434+00:00] Validating properties policy for policy: zoneB 15:01:14 policy-opa-pdp | INFO[2025-06-16T14:58:54.4435+00:00] Validation successful for policy: zoneB 15:01:14 policy-opa-pdp | INFO[2025-06-16T14:58:54.4437+00:00] Directory created: /opt/policies/zoneB 15:01:14 policy-opa-pdp | INFO[2025-06-16T14:58:54.4438+00:00] Policy file saved: /opt/policies/zoneB/policy.rego 15:01:14 policy-opa-pdp | INFO[2025-06-16T14:58:54.4440+00:00] Directory created: /opt/data/node/zoneB 15:01:14 policy-opa-pdp | INFO[2025-06-16T14:58:54.4440+00:00] Data file saved: /opt/data/node/zoneB/data.json 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:58:54.4441+00:00] Before calling combinedoutput 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:58:54.4699+00:00] Bundle Built Sucessfully.... 
15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:58:54.4734+00:00] storage not found creating : /node/zoneB 15:01:14 policy-opa-pdp | INFO[2025-06-16T14:58:54.4736+00:00] PoliciesDeployed Map: { 15:01:14 policy-opa-pdp | "deployed_policies_dict": [ 15:01:14 policy-opa-pdp | { 15:01:14 policy-opa-pdp | "data": [ 15:01:14 policy-opa-pdp | "node.slice.capacity.check" 15:01:14 policy-opa-pdp | ], 15:01:14 policy-opa-pdp | "policy": [ 15:01:14 policy-opa-pdp | "slice.capacity.check" 15:01:14 policy-opa-pdp | ], 15:01:14 policy-opa-pdp | "policy-id": "slice.capacity.check", 15:01:14 policy-opa-pdp | "policy-version": "1.0.0" 15:01:14 policy-opa-pdp | }, 15:01:14 policy-opa-pdp | { 15:01:14 policy-opa-pdp | "data": [ 15:01:14 policy-opa-pdp | "node.zoneB" 15:01:14 policy-opa-pdp | ], 15:01:14 policy-opa-pdp | "policy": [ 15:01:14 policy-opa-pdp | "zoneB" 15:01:14 policy-opa-pdp | ], 15:01:14 policy-opa-pdp | "policy-id": "zoneB", 15:01:14 policy-opa-pdp | "policy-version": "1.0.6" 15:01:14 policy-opa-pdp | } 15:01:14 policy-opa-pdp | ] 15:01:14 policy-opa-pdp | } 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:58:54.4736+00:00] Loaded Policy: zoneB 15:01:14 policy-opa-pdp | INFO[2025-06-16T14:58:54.4736+00:00] Processed policies_to_be_deployed successfully 15:01:14 policy-opa-pdp | INFO[2025-06-16T14:58:54.4737+00:00] Sending PDP Status With Update Response 15:01:14 policy-opa-pdp | 2025/06/16 14:58:54 KafkaProducer or producer produce message 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:58:54.4738+00:00] [OUT|KAFKA|policy-pdp-pap] 15:01:14 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"39b5658c-5e3b-4103-b789-ca3be64640d6","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"zoneB\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"zoneB","version":"1.0.6"}],"name":"opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c","requestId":"2dd33f87-288a-4a72-87c4-a1b89ba21549","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750085934473","deploymentInstanceInfo":""} 15:01:14 policy-opa-pdp | INFO[2025-06-16T14:58:54.4738+00:00] PDP_STATUS Message Sent Successfully 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:58:54.4738+00:00] 0 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:58:54.4831+00:00] [IN|KAFKA|policy-pdp-pap] 15:01:14 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"39b5658c-5e3b-4103-b789-ca3be64640d6","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"zoneB\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"zoneB","version":"1.0.6"}],"name":"opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c","requestId":"2dd33f87-288a-4a72-87c4-a1b89ba21549","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750085934473","deploymentInstanceInfo":""} 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:58:54.4831+00:00] messageType: PDP_STATUS 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:58:54.4832+00:00] discarding event of type PDP_STATUS 15:01:14 policy-opa-pdp | INFO[2025-06-16T14:59:18.6360+00:00] PDP received a request to get data through API 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:18.6363+00:00] datapath to get Data : /node/zoneB/zone 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:18.6365+00:00] 
Json Data at /node/zoneB/zone: {"zone_access_logs":[{"access":"granted","log_id":"log1","timestamp":"2024-11-01T09:00:00Z","user":"user1","zone_id":"zoneA"},{"access":"denied","log_id":"log2","timestamp":"2024-11-01T10:30:00Z","user":"user2","zone_id":"zoneA"},{"access":"granted","log_id":"log3","timestamp":"2024-11-01T11:00:00Z","user":"user3","zone_id":"zoneB"}]} 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:18.6510+00:00] PDP received a decision request. 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:18.6511+00:00] Headers processed for requestId: Unknown 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:18.6515+00:00] Validation successful for request fields 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:18.6515+00:00] SDK making a decision 15:01:14 policy-opa-pdp | {"decision_id":"a600bc77-9e35-4ac9-9223-5102c4f8fb29","input":{"actions":["view"],"datatypes":["access","user"],"log_id":"log1","time_period":{"from":"2024-11-01T09:00:00Z","to":"2024-11-01T10:00:00Z"},"zone_id":"zoneA"},"labels":{"id":"a5cc5313-c8d9-4203-bac1-aa4aec533000","version":"1.1.0"},"level":"info","metrics":{"timer_rego_external_resolve_ns":720,"timer_rego_query_compile_ns":130681,"timer_rego_query_eval_ns":487655,"timer_rego_query_parse_ns":88481,"timer_sdk_decision_eval_ns":897660},"msg":"Decision Log","nd_builtin_cache":null,"path":"zoneB","result":{"action_is_log_view":true,"allow":true,"has_zone_access":[{"access":"granted","user":"user1"}]},"time":"2025-06-16T14:59:18Z","timestamp":"2025-06-16T14:59:18.651622402Z","type":"openpolicyagent.org/decision_logs"} 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:18.6531+00:00] RAW opa Decision output: 15:01:14 policy-opa-pdp | { 15:01:14 policy-opa-pdp | "ID": "a600bc77-9e35-4ac9-9223-5102c4f8fb29", 15:01:14 policy-opa-pdp | "Result": { 15:01:14 policy-opa-pdp | "action_is_log_view": true, 15:01:14 policy-opa-pdp | "allow": true, 15:01:14 policy-opa-pdp | "has_zone_access": [ 15:01:14 policy-opa-pdp | { 15:01:14 policy-opa-pdp | "access": "granted", 15:01:14 policy-opa-pdp | "user": "user1" 15:01:14 policy-opa-pdp | } 15:01:14 policy-opa-pdp | ] 15:01:14 policy-opa-pdp | }, 15:01:14 policy-opa-pdp | "Provenance": { 15:01:14 policy-opa-pdp | "version": "1.1.0", 15:01:14 policy-opa-pdp | "build_commit": "", 15:01:14 policy-opa-pdp | "build_timestamp": "", 15:01:14 policy-opa-pdp | "build_hostname": "" 15:01:14 policy-opa-pdp | } 15:01:14 policy-opa-pdp | } 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:18.6624+00:00] PDP received a decision request. 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:18.6625+00:00] Headers processed for requestId: Unknown 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:18.6629+00:00] Validation successful for request fields 15:01:14 policy-opa-pdp | WARN[2025-06-16T14:59:18.6631+00:00] Policy Name zoeB does not exist 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:18.6706+00:00] PDP received a decision request. 
15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:18.6706+00:00] Headers processed for requestId: Unknown 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:18.6708+00:00] Validation successful for request fields 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:18.6709+00:00] SDK making a decision 15:01:14 policy-opa-pdp | {"decision_id":"8f47e947-c4d0-4746-ae42-b27daae2d187","input":{"actions":["view"],"datatypes":["access","user"],"log_id":"log1","time_period":{"from":"2024-11-01T09:00:00Z","to":"2024-11-01T10:00:00Z"},"zone_id":"zoneA"},"labels":{"id":"a5cc5313-c8d9-4203-bac1-aa4aec533000","version":"1.1.0"},"level":"info","metrics":{"timer_rego_external_resolve_ns":600,"timer_rego_query_eval_ns":301743,"timer_sdk_decision_eval_ns":373404},"msg":"Decision Log","nd_builtin_cache":null,"path":"zoneB","result":{"action_is_log_view":true,"allow":true,"has_zone_access":[{"access":"granted","user":"user1"}]},"time":"2025-06-16T14:59:18Z","timestamp":"2025-06-16T14:59:18.671026237Z","type":"openpolicyagent.org/decision_logs"} 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:18.6715+00:00] RAW opa Decision output: 15:01:14 policy-opa-pdp | { 15:01:14 policy-opa-pdp | "ID": "8f47e947-c4d0-4746-ae42-b27daae2d187", 15:01:14 policy-opa-pdp | "Result": { 15:01:14 policy-opa-pdp | "action_is_log_view": true, 15:01:14 policy-opa-pdp | "allow": true, 15:01:14 policy-opa-pdp | "has_zone_access": [ 15:01:14 policy-opa-pdp | { 15:01:14 policy-opa-pdp | "access": "granted", 15:01:14 policy-opa-pdp | "user": "user1" 15:01:14 policy-opa-pdp | } 15:01:14 policy-opa-pdp | ] 15:01:14 policy-opa-pdp | }, 15:01:14 policy-opa-pdp | "Provenance": { 15:01:14 policy-opa-pdp | "version": "1.1.0", 15:01:14 policy-opa-pdp | "build_commit": "", 15:01:14 policy-opa-pdp | "build_timestamp": "", 15:01:14 policy-opa-pdp | "build_hostname": "" 15:01:14 policy-opa-pdp | } 15:01:14 policy-opa-pdp | } 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:18.9952+00:00] [IN|KAFKA|policy-pdp-pap] 15:01:14 policy-opa-pdp | {"source":"pap-ef1c2203-595c-4ac7-b11f-f04e7efc9986","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"zoneB","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"2f4ee97e-e981-4cd6-90fa-505790333b58","timestampMs":1750085958960,"name":"opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:18.9953+00:00] messageType: PDP_UPDATE 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:18.9955+00:00] PDP_UPDATE Message received: {"source":"pap-ef1c2203-595c-4ac7-b11f-f04e7efc9986","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"zoneB","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"2f4ee97e-e981-4cd6-90fa-505790333b58","timestampMs":1750085958960,"name":"opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 15:01:14 policy-opa-pdp | INFO[2025-06-16T14:59:18.9955+00:00] Found Policies to be undeployed 15:01:14 policy-opa-pdp | INFO[2025-06-16T14:59:18.9955+00:00] Extracted Policy Name: zoneB, Version: 1.0.6 for undeployment 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:18.9955+00:00] Deleting Policy from OPA : /zoneB 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:18.9973+00:00] Removing policy directory: /opt/policies/zoneB 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:18.9975+00:00] Deleting data from OPA : /node/zoneB 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:18.9975+00:00] Analyzing dataPath: /node/zoneB 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:18.9976+00:00] Path 
segments: [ node zoneB] 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:18.9976+00:00] Path doesn't have any parent-child hierarchy;so returning the original path: /node/zoneB 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:18.9976+00:00] Removing data directory: /opt/data/node/zoneB 15:01:14 policy-opa-pdp | INFO[2025-06-16T14:59:18.9977+00:00] PoliciesDeployed Map: { 15:01:14 policy-opa-pdp | "deployed_policies_dict": [ 15:01:14 policy-opa-pdp | { 15:01:14 policy-opa-pdp | "data": [ 15:01:14 policy-opa-pdp | "node.slice.capacity.check" 15:01:14 policy-opa-pdp | ], 15:01:14 policy-opa-pdp | "policy": [ 15:01:14 policy-opa-pdp | "slice.capacity.check" 15:01:14 policy-opa-pdp | ], 15:01:14 policy-opa-pdp | "policy-id": "slice.capacity.check", 15:01:14 policy-opa-pdp | "policy-version": "1.0.0" 15:01:14 policy-opa-pdp | } 15:01:14 policy-opa-pdp | ] 15:01:14 policy-opa-pdp | } 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:18.9977+00:00] Policies Map After Undeployment : { 15:01:14 policy-opa-pdp | "deployed_policies_dict": [ 15:01:14 policy-opa-pdp | { 15:01:14 policy-opa-pdp | "data": [ 15:01:14 policy-opa-pdp | "node.slice.capacity.check" 15:01:14 policy-opa-pdp | ], 15:01:14 policy-opa-pdp | "policy": [ 15:01:14 policy-opa-pdp | "slice.capacity.check" 15:01:14 policy-opa-pdp | ], 15:01:14 policy-opa-pdp | "policy-id": "slice.capacity.check", 15:01:14 policy-opa-pdp | "policy-version": "1.0.0" 15:01:14 policy-opa-pdp | } 15:01:14 policy-opa-pdp | ] 15:01:14 policy-opa-pdp | } 15:01:14 policy-opa-pdp | INFO[2025-06-16T14:59:18.9977+00:00] Processed policies_to_be_undeployed successfully 15:01:14 policy-opa-pdp | INFO[2025-06-16T14:59:18.9978+00:00] Sending PDP Status With Update Response 15:01:14 policy-opa-pdp | 2025/06/16 14:59:18 KafkaProducer or producer produce message 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:18.9978+00:00] [OUT|KAFKA|policy-pdp-pap] 15:01:14 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"2f4ee97e-e981-4cd6-90fa-505790333b58","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"zoneB\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c","requestId":"2ab93fff-b361-4ed3-b748-e7e615842adb","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750085958997","deploymentInstanceInfo":""} 15:01:14 policy-opa-pdp | INFO[2025-06-16T14:59:18.9978+00:00] PDP_STATUS Message Sent Successfully 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:18.9979+00:00] 0 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:19.0061+00:00] [IN|KAFKA|policy-pdp-pap] 15:01:14 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"2f4ee97e-e981-4cd6-90fa-505790333b58","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"zoneB\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c","requestId":"2ab93fff-b361-4ed3-b748-e7e615842adb","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750085958997","deploymentInstanceInfo":""} 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:19.0062+00:00] messageType: PDP_STATUS 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:19.0062+00:00] discarding event of type PDP_STATUS 15:01:14 
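Undeployment reverses those steps: delete the policy from OPA (/zoneB), remove /opt/policies/zoneB and /opt/data/node/zoneB, and drop the entry from the deployed-policies map, leaving only slice.capacity.check. A condensed Python sketch of the filesystem and map cleanup (the OPA-side delete itself is omitted):

    import shutil
    from pathlib import Path

    def undeploy(policy_id: str, data_key: str, deployed: list[dict]) -> list[dict]:
        # Mirror the logged cleanup: policy dir, data dir, then the map entry.
        shutil.rmtree(Path("/opt/policies").joinpath(*policy_id.split(".")), ignore_errors=True)
        shutil.rmtree(Path("/opt/data").joinpath(*data_key.split(".")), ignore_errors=True)
        return [p for p in deployed if p["policy-id"] != policy_id]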
policy-opa-pdp | DEBU[2025-06-16T14:59:20.2798+00:00] [IN|KAFKA|policy-pdp-pap] 15:01:14 policy-opa-pdp | {"source":"pap-ef1c2203-595c-4ac7-b11f-f04e7efc9986","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.vehicle":"ewogICJ2ZWhpY2xlcyI6IFsKICAgIHsgInZlaGljbGVfaWQiOiAidjEiLCAib3duZXIiOiAidXNlcjEiLCAidHlwZSI6ICJjYXIiLCAic3RhdHVzIjogImF2YWlsYWJsZSIgfSwKICAgIHsgInZlaGljbGVfaWQiOiAidjIiLCAib3duZXIiOiAidXNlcjIiLCAidHlwZSI6ICJiaWtlIiwgInN0YXR1cyI6ICJpbiB1c2UiIH0KICBdCn0K"},"policy":{"vehicle":"cGFja2FnZSB2ZWhpY2xlCgppbXBvcnQgIHJlZ28udjEKCmRlZmF1bHQgYWxsb3cgOj0gZmFsc2UKCmFsbG93IGlmIHsKICAgIHVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzCiAgICBhY3Rpb25faXNfZ3JhbnRlZAp9CgphY3Rpb25faXNfZ3JhbnRlZCBpZiB7CiAgICAidXNlIiBpbiBpbnB1dC5hY3Rpb25zCn0KCnVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzIGNvbnRhaW5zIHZlaGljbGVfZGF0YSBpZiB7CiAgICBzb21lIHZlaGljbGUgaW4gZGF0YS5ub2RlLnZlaGljbGUudmVoaWNsZXMKICAgIHZlaGljbGUudmVoaWNsZV9pZCA9PSBpbnB1dC52ZWhpY2xlX2lkCiAgICB2ZWhpY2xlLm93bmVyID09IGlucHV0LnVzZXIKICAgIHZlaGljbGVfZGF0YSA6PSB7aW5mbzogdmVoaWNsZVtpbmZvXSB8IGluZm8gaW4gaW5wdXQuYXR0cmlidXRlc30KfQo="}},"name":"vehicle","version":"1.0.6","metadata":{"policy-id":"vehicle","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"ee93bcaf-6615-418e-922a-eb845e0869a2","timestampMs":1750085960250,"name":"opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:20.2800+00:00] messageType: PDP_UPDATE 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:20.2802+00:00] PDP_UPDATE Message received: {"source":"pap-ef1c2203-595c-4ac7-b11f-f04e7efc9986","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.vehicle":"ewogICJ2ZWhpY2xlcyI6IFsKICAgIHsgInZlaGljbGVfaWQiOiAidjEiLCAib3duZXIiOiAidXNlcjEiLCAidHlwZSI6ICJjYXIiLCAic3RhdHVzIjogImF2YWlsYWJsZSIgfSwKICAgIHsgInZlaGljbGVfaWQiOiAidjIiLCAib3duZXIiOiAidXNlcjIiLCAidHlwZSI6ICJiaWtlIiwgInN0YXR1cyI6ICJpbiB1c2UiIH0KICBdCn0K"},"policy":{"vehicle":"cGFja2FnZSB2ZWhpY2xlCgppbXBvcnQgIHJlZ28udjEKCmRlZmF1bHQgYWxsb3cgOj0gZmFsc2UKCmFsbG93IGlmIHsKICAgIHVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzCiAgICBhY3Rpb25faXNfZ3JhbnRlZAp9CgphY3Rpb25faXNfZ3JhbnRlZCBpZiB7CiAgICAidXNlIiBpbiBpbnB1dC5hY3Rpb25zCn0KCnVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzIGNvbnRhaW5zIHZlaGljbGVfZGF0YSBpZiB7CiAgICBzb21lIHZlaGljbGUgaW4gZGF0YS5ub2RlLnZlaGljbGUudmVoaWNsZXMKICAgIHZlaGljbGUudmVoaWNsZV9pZCA9PSBpbnB1dC52ZWhpY2xlX2lkCiAgICB2ZWhpY2xlLm93bmVyID09IGlucHV0LnVzZXIKICAgIHZlaGljbGVfZGF0YSA6PSB7aW5mbzogdmVoaWNsZVtpbmZvXSB8IGluZm8gaW4gaW5wdXQuYXR0cmlidXRlc30KfQo="}},"name":"vehicle","version":"1.0.6","metadata":{"policy-id":"vehicle","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"ee93bcaf-6615-418e-922a-eb845e0869a2","timestampMs":1750085960250,"name":"opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:20.2802+00:00] Check if Policy is Already Deployed: { 15:01:14 policy-opa-pdp | "deployed_policies_dict": [ 15:01:14 policy-opa-pdp | { 15:01:14 policy-opa-pdp | "data": [ 15:01:14 policy-opa-pdp | "node.slice.capacity.check" 15:01:14 policy-opa-pdp | ], 15:01:14 policy-opa-pdp | "policy": [ 15:01:14 policy-opa-pdp | "slice.capacity.check" 15:01:14 policy-opa-pdp | ], 15:01:14 policy-opa-pdp | "policy-id": "slice.capacity.check", 15:01:14 policy-opa-pdp | "policy-version": "1.0.0" 15:01:14 policy-opa-pdp | } 15:01:14 policy-opa-pdp | ] 
15:01:14 policy-opa-pdp | } 15:01:14 policy-opa-pdp | INFO[2025-06-16T14:59:20.2802+00:00] Policy is new and should be deployed: vehicle 1.0.6 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:20.2802+00:00] Policy Is Allowed: vehicle 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:20.2802+00:00] Validating properties data for policy: vehicle 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:20.2802+00:00] Validating properties policy for policy: vehicle 15:01:14 policy-opa-pdp | INFO[2025-06-16T14:59:20.2803+00:00] Validation successful for policy: vehicle 15:01:14 policy-opa-pdp | INFO[2025-06-16T14:59:20.2804+00:00] Directory created: /opt/policies/vehicle 15:01:14 policy-opa-pdp | INFO[2025-06-16T14:59:20.2804+00:00] Policy file saved: /opt/policies/vehicle/policy.rego 15:01:14 policy-opa-pdp | INFO[2025-06-16T14:59:20.2805+00:00] Directory created: /opt/data/node/vehicle 15:01:14 policy-opa-pdp | INFO[2025-06-16T14:59:20.2805+00:00] Data file saved: /opt/data/node/vehicle/data.json 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:20.2805+00:00] Before calling combinedoutput 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:20.3037+00:00] Bundle Built Sucessfully.... 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:20.3070+00:00] storage not found creating : /node/vehicle 15:01:14 policy-opa-pdp | INFO[2025-06-16T14:59:20.3071+00:00] PoliciesDeployed Map: { 15:01:14 policy-opa-pdp | "deployed_policies_dict": [ 15:01:14 policy-opa-pdp | { 15:01:14 policy-opa-pdp | "data": [ 15:01:14 policy-opa-pdp | "node.slice.capacity.check" 15:01:14 policy-opa-pdp | ], 15:01:14 policy-opa-pdp | "policy": [ 15:01:14 policy-opa-pdp | "slice.capacity.check" 15:01:14 policy-opa-pdp | ], 15:01:14 policy-opa-pdp | "policy-id": "slice.capacity.check", 15:01:14 policy-opa-pdp | "policy-version": "1.0.0" 15:01:14 policy-opa-pdp | }, 15:01:14 policy-opa-pdp | { 15:01:14 policy-opa-pdp | "data": [ 15:01:14 policy-opa-pdp | "node.vehicle" 15:01:14 policy-opa-pdp | ], 15:01:14 policy-opa-pdp | "policy": [ 15:01:14 policy-opa-pdp | "vehicle" 15:01:14 policy-opa-pdp | ], 15:01:14 policy-opa-pdp | "policy-id": "vehicle", 15:01:14 policy-opa-pdp | "policy-version": "1.0.6" 15:01:14 policy-opa-pdp | } 15:01:14 policy-opa-pdp | ] 15:01:14 policy-opa-pdp | } 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:20.3071+00:00] Loaded Policy: vehicle 15:01:14 policy-opa-pdp | 2025/06/16 14:59:20 KafkaProducer or producer produce message 15:01:14 policy-opa-pdp | INFO[2025-06-16T14:59:20.3071+00:00] Processed policies_to_be_deployed successfully 15:01:14 policy-opa-pdp | INFO[2025-06-16T14:59:20.3071+00:00] Sending PDP Status With Update Response 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:20.3072+00:00] [OUT|KAFKA|policy-pdp-pap] 15:01:14 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"ee93bcaf-6615-418e-922a-eb845e0869a2","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"vehicle\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"vehicle","version":"1.0.6"}],"name":"opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c","requestId":"051d5acf-ecb1-44ff-bed7-8c5ccea8992f","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750085960307","deploymentInstanceInfo":""} 15:01:14 policy-opa-pdp | INFO[2025-06-16T14:59:20.3072+00:00] PDP_STATUS Message Sent Successfully 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:20.3072+00:00] 0 
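The exchanges that follow exercise the dynamic-data API against /node/vehicle with three JSON Patch operations: add /round "trail", replace /round with 578, then remove /round, each confirmed by a GET. The same sequence applied locally with the jsonpatch library:

    import jsonpatch

    doc = {"vehicles": [
        {"vehicle_id": "v1", "owner": "user1", "type": "car", "status": "available"},
        {"vehicle_id": "v2", "owner": "user2", "type": "bike", "status": "in use"},
    ]}

    doc = jsonpatch.apply_patch(doc, [{"op": "add", "path": "/round", "value": "trail"}])
    doc = jsonpatch.apply_patch(doc, [{"op": "replace", "path": "/round", "value": 578}])
    doc = jsonpatch.apply_patch(doc, [{"op": "remove", "path": "/round"}])
    print(doc)  # back to the original vehicles document, matching the final GET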
15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:20.3163+00:00] [IN|KAFKA|policy-pdp-pap] 15:01:14 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"ee93bcaf-6615-418e-922a-eb845e0869a2","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"vehicle\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"vehicle","version":"1.0.6"}],"name":"opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c","requestId":"051d5acf-ecb1-44ff-bed7-8c5ccea8992f","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750085960307","deploymentInstanceInfo":""} 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:20.3163+00:00] messageType: PDP_STATUS 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:20.3163+00:00] discarding event of type PDP_STATUS 15:01:14 policy-opa-pdp | INFO[2025-06-16T14:59:44.3595+00:00] PDP received a request to get data through API 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:44.3599+00:00] datapath to get Data : /node/vehicle 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:44.3600+00:00] Json Data at /node/vehicle: {"vehicles":[{"owner":"user1","status":"available","type":"car","vehicle_id":"v1"},{"owner":"user2","status":"in use","type":"bike","vehicle_id":"v2"}]} 15:01:14 policy-opa-pdp | INFO[2025-06-16T14:59:44.3703+00:00] PDP received a request to update data through API 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:44.3707+00:00] All fields are valid! 15:01:14 policy-opa-pdp | INFO[2025-06-16T14:59:44.3708+00:00] data : [map[op:add path:/round value:trail]] 15:01:14 policy-opa-pdp | INFO[2025-06-16T14:59:44.3708+00:00] policy name : vehicle 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:44.3709+00:00] deployedPolicies [map[data:[node.slice.capacity.check] policy:[slice.capacity.check] policy-id:slice.capacity.check policy-version:1.0.0] map[data:[node.vehicle] policy:[vehicle] policy-id:vehicle policy-version:1.0.6]] 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:44.3709+00:00] dirParts : [ node vehicle] 15:01:14 policy-opa-pdp | INFO[2025-06-16T14:59:44.3710+00:00] Matched policy: &{Data:[node.vehicle] Policy:[vehicle] PolicyID:vehicle PolicyVersion:1.0.6} 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:44.3710+00:00] root: /node/vehicle 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:44.3711+00:00] path : round 15:01:14 policy-opa-pdp | INFO[2025-06-16T14:59:44.3711+00:00] calling ParsePatchPathEscaped to check the path 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:44.3711+00:00] No path conflicts detected 15:01:14 policy-opa-pdp | INFO[2025-06-16T14:59:44.3711+00:00] Updated the data in the corresponding path successfully 15:01:14 policy-opa-pdp | INFO[2025-06-16T14:59:44.3785+00:00] PDP received a request to get data through API 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:44.3787+00:00] datapath to get Data : /node/vehicle 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:44.3788+00:00] Json Data at /node/vehicle: {"round":"trail","vehicles":[{"owner":"user1","status":"available","type":"car","vehicle_id":"v1"},{"owner":"user2","status":"in use","type":"bike","vehicle_id":"v2"}]} 15:01:14 policy-opa-pdp | INFO[2025-06-16T14:59:44.3884+00:00] PDP received a request to update data through API 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:44.3889+00:00] All fields are valid! 
15:01:14 policy-opa-pdp | INFO[2025-06-16T14:59:44.3890+00:00] data : [map[op:replace path:/round value:%!s(float64=578)]] 15:01:14 policy-opa-pdp | INFO[2025-06-16T14:59:44.3890+00:00] policy name : vehicle 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:44.3891+00:00] deployedPolicies [map[data:[node.slice.capacity.check] policy:[slice.capacity.check] policy-id:slice.capacity.check policy-version:1.0.0] map[data:[node.vehicle] policy:[vehicle] policy-id:vehicle policy-version:1.0.6]] 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:44.3891+00:00] dirParts : [ node vehicle] 15:01:14 policy-opa-pdp | INFO[2025-06-16T14:59:44.3892+00:00] Matched policy: &{Data:[node.vehicle] Policy:[vehicle] PolicyID:vehicle PolicyVersion:1.0.6} 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:44.3892+00:00] root: /node/vehicle 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:44.3892+00:00] path : round 15:01:14 policy-opa-pdp | INFO[2025-06-16T14:59:44.3893+00:00] calling ParsePatchPathEscaped to check the path 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:44.3893+00:00] No path conflicts detected 15:01:14 policy-opa-pdp | INFO[2025-06-16T14:59:44.3893+00:00] Updated the data in the corresponding path successfully 15:01:14 policy-opa-pdp | INFO[2025-06-16T14:59:44.3959+00:00] PDP received a request to get data through API 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:44.3959+00:00] datapath to get Data : /node/vehicle 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:44.3959+00:00] Json Data at /node/vehicle: {"round":578,"vehicles":[{"owner":"user1","status":"available","type":"car","vehicle_id":"v1"},{"owner":"user2","status":"in use","type":"bike","vehicle_id":"v2"}]} 15:01:14 policy-opa-pdp | INFO[2025-06-16T14:59:44.4056+00:00] PDP received a request to update data through API 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:44.4059+00:00] All fields are valid! 
15:01:14 policy-opa-pdp | INFO[2025-06-16T14:59:44.4056+00:00] PDP received a request to update data through API
15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:44.4059+00:00] All fields are valid!
15:01:14 policy-opa-pdp | INFO[2025-06-16T14:59:44.4060+00:00] data : [map[op:remove path:/round]]
15:01:14 policy-opa-pdp | INFO[2025-06-16T14:59:44.4060+00:00] policy name : vehicle
15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:44.4061+00:00] deployedPolicies [map[data:[node.slice.capacity.check] policy:[slice.capacity.check] policy-id:slice.capacity.check policy-version:1.0.0] map[data:[node.vehicle] policy:[vehicle] policy-id:vehicle policy-version:1.0.6]]
15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:44.4061+00:00] dirParts : [ node vehicle]
15:01:14 policy-opa-pdp | INFO[2025-06-16T14:59:44.4062+00:00] Matched policy: &{Data:[node.vehicle] Policy:[vehicle] PolicyID:vehicle PolicyVersion:1.0.6}
15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:44.4062+00:00] root: /node/vehicle
15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:44.4062+00:00] path : round
15:01:14 policy-opa-pdp | INFO[2025-06-16T14:59:44.4062+00:00] calling ParsePatchPathEscaped to check the path
15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:44.4063+00:00] No path conflicts detected
15:01:14 policy-opa-pdp | INFO[2025-06-16T14:59:44.4063+00:00] Updated the data in the corresponding path successfully
15:01:14 policy-opa-pdp | INFO[2025-06-16T14:59:44.4127+00:00] PDP received a request to get data through API
15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:44.4127+00:00] datapath to get Data : /node/vehicle
15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:44.4128+00:00] Json Data at /node/vehicle: {"vehicles":[{"owner":"user1","status":"available","type":"car","vehicle_id":"v1"},{"owner":"user2","status":"in use","type":"bike","vehicle_id":"v2"}]}
15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:44.4227+00:00] PDP received a decision request.
15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:44.4228+00:00] Headers processed for requestId: Unknown
15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:44.4230+00:00] Validation successful for request fields
15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:44.4231+00:00] SDK making a decision
15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:44.4243+00:00] RAW opa Decision output:
15:01:14 policy-opa-pdp | {
15:01:14 policy-opa-pdp | "ID": "96cd53f4-4380-4d0c-9eef-baca6bbbf8b9",
15:01:14 policy-opa-pdp | "Result": {
15:01:14 policy-opa-pdp | "action_is_granted": true,
15:01:14 policy-opa-pdp | "allow": true,
15:01:14 policy-opa-pdp | "user_has_vehicle_access": [
15:01:14 policy-opa-pdp | {
15:01:14 policy-opa-pdp | "status": "available",
15:01:14 policy-opa-pdp | "type": "car"
15:01:14 policy-opa-pdp | }
15:01:14 policy-opa-pdp | ]
15:01:14 policy-opa-pdp | },
15:01:14 policy-opa-pdp | "Provenance": {
15:01:14 policy-opa-pdp | "version": "1.1.0",
15:01:14 policy-opa-pdp | "build_commit": "",
15:01:14 policy-opa-pdp | "build_timestamp": "",
15:01:14 policy-opa-pdp | "build_hostname": ""
15:01:14 policy-opa-pdp | }
15:01:14 policy-opa-pdp | }
15:01:14 policy-opa-pdp | {"decision_id":"96cd53f4-4380-4d0c-9eef-baca6bbbf8b9","input":{"actions":["use"],"attributes":["type","status"],"user":"user1","vehicle_id":"v1"},"labels":{"id":"a5cc5313-c8d9-4203-bac1-aa4aec533000","version":"1.1.0"},"level":"info","metrics":{"timer_rego_external_resolve_ns":1040,"timer_rego_query_compile_ns":158902,"timer_rego_query_eval_ns":385424,"timer_rego_query_parse_ns":86141,"timer_sdk_decision_eval_ns":844399},"msg":"Decision Log","nd_builtin_cache":null,"path":"vehicle","result":{"action_is_granted":true,"allow":true,"user_has_vehicle_access":[{"status":"available","type":"car"}]},"time":"2025-06-16T14:59:44Z","timestamp":"2025-06-16T14:59:44.423159542Z","type":"openpolicyagent.org/decision_logs"}
15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:44.4317+00:00] PDP received a decision request.
15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:44.4318+00:00] Headers processed for requestId: Unknown
15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:44.4324+00:00] Validation successful for request fields
15:01:14 policy-opa-pdp | WARN[2025-06-16T14:59:44.4326+00:00] Policy Name vehile does not exist
15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:44.4395+00:00] PDP received a decision request.
15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:44.4396+00:00] Headers processed for requestId: Unknown
15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:44.4398+00:00] Validation successful for request fields
15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:44.4399+00:00] SDK making a decision
15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:44.4406+00:00] RAW opa Decision output:
15:01:14 policy-opa-pdp | {
15:01:14 policy-opa-pdp | "ID": "6056efaa-ad6b-47de-8477-c0d6f9aa7340",
15:01:14 policy-opa-pdp | "Result": {
15:01:14 policy-opa-pdp | "action_is_granted": true,
15:01:14 policy-opa-pdp | "allow": true,
15:01:14 policy-opa-pdp | "user_has_vehicle_access": [
15:01:14 policy-opa-pdp | {
15:01:14 policy-opa-pdp | "status": "available",
15:01:14 policy-opa-pdp | "type": "car"
15:01:14 policy-opa-pdp | }
15:01:14 policy-opa-pdp | ]
15:01:14 policy-opa-pdp | },
15:01:14 policy-opa-pdp | "Provenance": {
15:01:14 policy-opa-pdp | "version": "1.1.0",
15:01:14 policy-opa-pdp | "build_commit": "",
15:01:14 policy-opa-pdp | "build_timestamp": "",
15:01:14 policy-opa-pdp | "build_hostname": ""
15:01:14 policy-opa-pdp | }
15:01:14 policy-opa-pdp | }
15:01:14 policy-opa-pdp | {"decision_id":"6056efaa-ad6b-47de-8477-c0d6f9aa7340","input":{"actions":["use"],"attributes":["type","status"],"user":"user1","vehicle_id":"v1"},"labels":{"id":"a5cc5313-c8d9-4203-bac1-aa4aec533000","version":"1.1.0"},"level":"info","metrics":{"timer_rego_external_resolve_ns":1080,"timer_rego_query_eval_ns":417105,"timer_sdk_decision_eval_ns":505595},"msg":"Decision Log","nd_builtin_cache":null,"path":"vehicle","result":{"action_is_granted":true,"allow":true,"user_has_vehicle_access":[{"status":"available","type":"car"}]},"time":"2025-06-16T14:59:44Z","timestamp":"2025-06-16T14:59:44.439960513Z","type":"openpolicyagent.org/decision_logs"}
{"source":"pap-ef1c2203-595c-4ac7-b11f-f04e7efc9986","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"vehicle","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"c7ac162c-8487-4b7a-8206-a9bed30842a0","timestampMs":1750085984736,"name":"opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 15:01:14 policy-opa-pdp | INFO[2025-06-16T14:59:44.7615+00:00] Found Policies to be undeployed 15:01:14 policy-opa-pdp | INFO[2025-06-16T14:59:44.7615+00:00] Extracted Policy Name: vehicle, Version: 1.0.6 for undeployment 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:44.7616+00:00] Deleting Policy from OPA : /vehicle 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:44.7642+00:00] Removing policy directory: /opt/policies/vehicle 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:44.7645+00:00] Deleting data from OPA : /node/vehicle 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:44.7646+00:00] Analyzing dataPath: /node/vehicle 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:44.7646+00:00] Path segments: [ node vehicle] 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:44.7646+00:00] Path doesn't have any parent-child hierarchy;so returning the original path: /node/vehicle 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:44.7646+00:00] Removing data directory: /opt/data/node/vehicle 15:01:14 policy-opa-pdp | INFO[2025-06-16T14:59:44.7648+00:00] PoliciesDeployed Map: { 15:01:14 policy-opa-pdp | "deployed_policies_dict": [ 15:01:14 policy-opa-pdp | { 15:01:14 policy-opa-pdp | "data": [ 15:01:14 policy-opa-pdp | "node.slice.capacity.check" 15:01:14 policy-opa-pdp | ], 15:01:14 policy-opa-pdp | "policy": [ 15:01:14 policy-opa-pdp | "slice.capacity.check" 15:01:14 policy-opa-pdp | ], 15:01:14 policy-opa-pdp | "policy-id": "slice.capacity.check", 15:01:14 policy-opa-pdp | "policy-version": "1.0.0" 15:01:14 policy-opa-pdp | } 15:01:14 policy-opa-pdp | ] 15:01:14 policy-opa-pdp | } 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:44.7648+00:00] Policies Map After Undeployment : { 15:01:14 policy-opa-pdp | "deployed_policies_dict": [ 15:01:14 policy-opa-pdp | { 15:01:14 policy-opa-pdp | "data": [ 15:01:14 policy-opa-pdp | "node.slice.capacity.check" 15:01:14 policy-opa-pdp | ], 15:01:14 policy-opa-pdp | "policy": [ 15:01:14 policy-opa-pdp | "slice.capacity.check" 15:01:14 policy-opa-pdp | ], 15:01:14 policy-opa-pdp | "policy-id": "slice.capacity.check", 15:01:14 policy-opa-pdp | "policy-version": "1.0.0" 15:01:14 policy-opa-pdp | } 15:01:14 policy-opa-pdp | ] 15:01:14 policy-opa-pdp | } 15:01:14 policy-opa-pdp | INFO[2025-06-16T14:59:44.7648+00:00] Processed policies_to_be_undeployed successfully 15:01:14 policy-opa-pdp | 2025/06/16 14:59:44 KafkaProducer or producer produce message 15:01:14 policy-opa-pdp | INFO[2025-06-16T14:59:44.7649+00:00] Sending PDP Status With Update Response 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:44.7650+00:00] [OUT|KAFKA|policy-pdp-pap] 15:01:14 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"c7ac162c-8487-4b7a-8206-a9bed30842a0","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"vehicle\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c","requestId":"6aea1fe4-f501-464d-bc72-e2c87fc025da","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750085984764","deploymentInstanceInfo":""} 
15:01:14 policy-opa-pdp | 2025/06/16 14:59:44 KafkaProducer or producer produce message
15:01:14 policy-opa-pdp | INFO[2025-06-16T14:59:44.7649+00:00] Sending PDP Status With Update Response
15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:44.7650+00:00] [OUT|KAFKA|policy-pdp-pap]
15:01:14 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"c7ac162c-8487-4b7a-8206-a9bed30842a0","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"vehicle\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c","requestId":"6aea1fe4-f501-464d-bc72-e2c87fc025da","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750085984764","deploymentInstanceInfo":""}
15:01:14 policy-opa-pdp | INFO[2025-06-16T14:59:44.7650+00:00] PDP_STATUS Message Sent Successfully
15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:44.7650+00:00] 0
15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:44.7716+00:00] [IN|KAFKA|policy-pdp-pap]
15:01:14 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"c7ac162c-8487-4b7a-8206-a9bed30842a0","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"vehicle\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c","requestId":"6aea1fe4-f501-464d-bc72-e2c87fc025da","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750085984764","deploymentInstanceInfo":""}
15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:44.7716+00:00] messageType: PDP_STATUS
15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:44.7716+00:00] discarding event of type PDP_STATUS
15:01:14 policy-opa-pdp | INFO[2025-06-16T14:59:45.2152+00:00] PDP received a request to get data through API
15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:45.2153+00:00] datapath to get Data : /node/vehicle
15:01:14 policy-opa-pdp | WARN[2025-06-16T14:59:45.2153+00:00] Error in reading data under /node/vehicle path
15:01:14 policy-opa-pdp | ERRO[2025-06-16T14:59:45.2153+00:00] Error in getting data - storage_not_found_error: /node/vehicle: document does not exist
15:01:14 policy-opa-pdp | INFO[2025-06-16T14:59:45.2255+00:00] PDP received a request to update data through API
15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:45.2258+00:00] All fields are valid!
15:01:14 policy-opa-pdp | INFO[2025-06-16T14:59:45.2258+00:00] data : [map[op:remove path:/round]]
15:01:14 policy-opa-pdp | INFO[2025-06-16T14:59:45.2258+00:00] policy name : vehicle
15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:45.2260+00:00] deployedPolicies [map[data:[node.slice.capacity.check] policy:[slice.capacity.check] policy-id:slice.capacity.check policy-version:1.0.0]]
15:01:14 policy-opa-pdp | ERRO[2025-06-16T14:59:45.2260+00:00] Policy associated with the patch request does not exist
15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:45.9680+00:00] [IN|KAFKA|policy-pdp-pap]
15:01:14 policy-opa-pdp |
{"source":"pap-ef1c2203-595c-4ac7-b11f-f04e7efc9986","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.abac":"ewogICAgInNlbnNvcl9kYXRhIjogWwogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDEiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiU3JpIExhbmthIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjI4IEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjYiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDAyIiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkNvbG9tYm8iLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMzAgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjEyMDAgbW0iLAogICAgICAgICAgICAid2luZHNwZWVkIjogIjYuMCBtL3MiLAogICAgICAgICAgICAiaHVtaWRpdHkiOiAiNDUlIiwKICAgICAgICAgICAgInBhcnRpY2xlX2RlbnNpdHkiOiAiMS41IGcvbCIsCiAgICAgICAgICAgICJ0aW1lc3RhbXAiOiAiMjAyNC0wMi0yNiIKICAgICAgICB9LAogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDMiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiS2FuZHkiLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMjUgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjgwMCBtbSIsCiAgICAgICAgICAgICJ3aW5kc3BlZWQiOiAiNC41IG0vcyIsCiAgICAgICAgICAgICJodW1pZGl0eSI6ICI2MCUiLAogICAgICAgICAgICAicGFydGljbGVfZGVuc2l0eSI6ICIxLjEgZy9sIiwKICAgICAgICAgICAgInRpbWVzdGFtcCI6ICIyMDI0LTAyLTI2IgogICAgICAgIH0sCiAgICAgICAgewogICAgICAgICAgICAiaWQiOiAiMDAwNCIsCiAgICAgICAgICAgICJsb2NhdGlvbiI6ICJHYWxsZSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI3LjIgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjMwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuOCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA1IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkphZmZuYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICItNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiMzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICIzLjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjIwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjAuOSBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA2IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIlRyaW5jb21hbGVlIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjIwIEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjU1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA3IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk51d2FyYSBFbGl5YSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNjAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI0LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjUwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA4IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkFudXJhZGhhcHVyYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyOCBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA5IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk1hdGFyYSIsC
iAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzMiBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiOTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI2LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjY1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfQogICAgXQp9"},"policy":{"abac":"cGFja2FnZSBhYmFjCgppbXBvcnQgcmVnby52MQoKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQoKYWxsb3cgaWYgewogdmlld2FibGVfc2Vuc29yX2RhdGEKIGFjdGlvbl9pc19yZWFkCn0KCmFjdGlvbl9pc19yZWFkIGlmICJyZWFkIiBpbiBpbnB1dC5hY3Rpb25zCgp2aWV3YWJsZV9zZW5zb3JfZGF0YSBjb250YWlucyB2aWV3X2RhdGEgaWYgewogc29tZSBzZW5zb3JfZGF0YSBpbiBkYXRhLm5vZGUuYWJhYy5zZW5zb3JfZGF0YQogc2Vuc29yX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KIHNlbnNvcl9kYXRhLnRpbWVzdGFtcCA8IGlucHV0LnRpbWVfcGVyaW9kLnRvCgogdmlld19kYXRhIDo9IHtkYXRhdHlwZTogc2Vuc29yX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"abac","version":"1.0.7","metadata":{"policy-id":"abac","policy-version":"1.0.7"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"967b7747-4de9-4350-a28c-de20927bd02f","timestampMs":1750085985947,"name":"opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:45.9683+00:00] messageType: PDP_UPDATE 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:45.9686+00:00] PDP_UPDATE Message received: {"source":"pap-ef1c2203-595c-4ac7-b11f-f04e7efc9986","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.abac":"ewogICAgInNlbnNvcl9kYXRhIjogWwogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDEiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiU3JpIExhbmthIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjI4IEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjYiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDAyIiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkNvbG9tYm8iLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMzAgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjEyMDAgbW0iLAogICAgICAgICAgICAid2luZHNwZWVkIjogIjYuMCBtL3MiLAogICAgICAgICAgICAiaHVtaWRpdHkiOiAiNDUlIiwKICAgICAgICAgICAgInBhcnRpY2xlX2RlbnNpdHkiOiAiMS41IGcvbCIsCiAgICAgICAgICAgICJ0aW1lc3RhbXAiOiAiMjAyNC0wMi0yNiIKICAgICAgICB9LAogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDMiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiS2FuZHkiLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMjUgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjgwMCBtbSIsCiAgICAgICAgICAgICJ3aW5kc3BlZWQiOiAiNC41IG0vcyIsCiAgICAgICAgICAgICJodW1pZGl0eSI6ICI2MCUiLAogICAgICAgICAgICAicGFydGljbGVfZGVuc2l0eSI6ICIxLjEgZy9sIiwKICAgICAgICAgICAgInRpbWVzdGFtcCI6ICIyMDI0LTAyLTI2IgogICAgICAgIH0sCiAgICAgICAgewogICAgICAgICAgICAiaWQiOiAiMDAwNCIsCiAgICAgICAgICAgICJsb2NhdGlvbiI6ICJHYWxsZSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI3LjIgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjMwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuOCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA1IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkphZmZuYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICItNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiMzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICIzLjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjIwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjAuOSBnL2wi
LAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA2IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIlRyaW5jb21hbGVlIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjIwIEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjU1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA3IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk51d2FyYSBFbGl5YSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNjAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI0LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjUwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA4IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkFudXJhZGhhcHVyYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyOCBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA5IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk1hdGFyYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzMiBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiOTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI2LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjY1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfQogICAgXQp9"},"policy":{"abac":"cGFja2FnZSBhYmFjCgppbXBvcnQgcmVnby52MQoKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQoKYWxsb3cgaWYgewogdmlld2FibGVfc2Vuc29yX2RhdGEKIGFjdGlvbl9pc19yZWFkCn0KCmFjdGlvbl9pc19yZWFkIGlmICJyZWFkIiBpbiBpbnB1dC5hY3Rpb25zCgp2aWV3YWJsZV9zZW5zb3JfZGF0YSBjb250YWlucyB2aWV3X2RhdGEgaWYgewogc29tZSBzZW5zb3JfZGF0YSBpbiBkYXRhLm5vZGUuYWJhYy5zZW5zb3JfZGF0YQogc2Vuc29yX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KIHNlbnNvcl9kYXRhLnRpbWVzdGFtcCA8IGlucHV0LnRpbWVfcGVyaW9kLnRvCgogdmlld19kYXRhIDo9IHtkYXRhdHlwZTogc2Vuc29yX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"abac","version":"1.0.7","metadata":{"policy-id":"abac","policy-version":"1.0.7"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"967b7747-4de9-4350-a28c-de20927bd02f","timestampMs":1750085985947,"name":"opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:45.9686+00:00] Check if Policy is Already Deployed: { 15:01:14 policy-opa-pdp | "deployed_policies_dict": [ 15:01:14 policy-opa-pdp | { 15:01:14 policy-opa-pdp | "data": [ 15:01:14 policy-opa-pdp | "node.slice.capacity.check" 15:01:14 policy-opa-pdp | ], 15:01:14 policy-opa-pdp | "policy": [ 15:01:14 policy-opa-pdp | "slice.capacity.check" 15:01:14 policy-opa-pdp | ], 15:01:14 policy-opa-pdp | "policy-id": "slice.capacity.check", 15:01:14 policy-opa-pdp | "policy-version": "1.0.0" 15:01:14 policy-opa-pdp | } 15:01:14 policy-opa-pdp | ] 15:01:14 policy-opa-pdp | } 15:01:14 policy-opa-pdp | INFO[2025-06-16T14:59:45.9687+00:00] Policy is new and should be deployed: abac 1.0.7 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:45.9688+00:00] Policy Is Allowed: abac 15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:45.9688+00:00] Validating properties data for policy: abac 15:01:14 
policy-opa-pdp | DEBU[2025-06-16T14:59:45.9688+00:00] Validating properties policy for policy: abac
15:01:14 policy-opa-pdp | INFO[2025-06-16T14:59:45.9688+00:00] Validation successful for policy: abac
15:01:14 policy-opa-pdp | INFO[2025-06-16T14:59:45.9690+00:00] Directory created: /opt/policies/abac
15:01:14 policy-opa-pdp | INFO[2025-06-16T14:59:45.9691+00:00] Policy file saved: /opt/policies/abac/policy.rego
15:01:14 policy-opa-pdp | INFO[2025-06-16T14:59:45.9692+00:00] Directory created: /opt/data/node/abac
15:01:14 policy-opa-pdp | INFO[2025-06-16T14:59:45.9693+00:00] Data file saved: /opt/data/node/abac/data.json
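[Note: the policy.rego written above is simply the base64 "properties.policy.abac" value from the PDP_UPDATE message decoded (e.g. with base64 -d); it comes out as the Rego below. Its three rules correspond exactly to the allow, action_is_read, and viewable_sensor_data fields returned in the abac decision responses later in this log.]

    package abac

    import rego.v1

    default allow := false

    allow if {
     viewable_sensor_data
     action_is_read
    }

    action_is_read if "read" in input.actions

    viewable_sensor_data contains view_data if {
     some sensor_data in data.node.abac.sensor_data
     sensor_data.timestamp >= input.time_period.from
     sensor_data.timestamp < input.time_period.to

     view_data := {datatype: sensor_data[datatype] | datatype in input.datatypes}
    }

[Note the half-open time window: with the later request's from of 2024-02-27 and to of 2024-02-29, only the rows stamped 2024-02-27 and 2024-02-28 (Galle, Jaffna, Trincomalee, Nuwara Eliya) pass the filter, which matches the viewable_sensor_data list in the decision output; the 2024-02-26 and 2024-02-29 rows are excluded.]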
15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:45.9693+00:00] Before calling combinedoutput
15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:45.9967+00:00] Bundle Built Successfully....
15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:46.0027+00:00] storage not found creating : /node/abac
15:01:14 policy-opa-pdp | INFO[2025-06-16T14:59:46.0030+00:00] PoliciesDeployed Map: {
15:01:14 policy-opa-pdp | "deployed_policies_dict": [
15:01:14 policy-opa-pdp | {
15:01:14 policy-opa-pdp | "data": [
15:01:14 policy-opa-pdp | "node.slice.capacity.check"
15:01:14 policy-opa-pdp | ],
15:01:14 policy-opa-pdp | "policy": [
15:01:14 policy-opa-pdp | "slice.capacity.check"
15:01:14 policy-opa-pdp | ],
15:01:14 policy-opa-pdp | "policy-id": "slice.capacity.check",
15:01:14 policy-opa-pdp | "policy-version": "1.0.0"
15:01:14 policy-opa-pdp | },
15:01:14 policy-opa-pdp | {
15:01:14 policy-opa-pdp | "data": [
15:01:14 policy-opa-pdp | "node.abac"
15:01:14 policy-opa-pdp | ],
15:01:14 policy-opa-pdp | "policy": [
15:01:14 policy-opa-pdp | "abac"
15:01:14 policy-opa-pdp | ],
15:01:14 policy-opa-pdp | "policy-id": "abac",
15:01:14 policy-opa-pdp | "policy-version": "1.0.7"
15:01:14 policy-opa-pdp | }
15:01:14 policy-opa-pdp | ]
15:01:14 policy-opa-pdp | }
15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:46.0030+00:00] Loaded Policy: abac
15:01:14 policy-opa-pdp | INFO[2025-06-16T14:59:46.0031+00:00] Processed policies_to_be_deployed successfully
15:01:14 policy-opa-pdp | INFO[2025-06-16T14:59:46.0031+00:00] Sending PDP Status With Update Response
15:01:14 policy-opa-pdp | 2025/06/16 14:59:46 KafkaProducer or producer produce message
15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:46.0033+00:00] [OUT|KAFKA|policy-pdp-pap]
15:01:14 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"967b7747-4de9-4350-a28c-de20927bd02f","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"abac\": \"1.0.7\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"abac","version":"1.0.7"}],"name":"opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c","requestId":"34e1f600-4598-41cc-b885-6ca5c41e05ef","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750085986003","deploymentInstanceInfo":""}
15:01:14 policy-opa-pdp | INFO[2025-06-16T14:59:46.0033+00:00] PDP_STATUS Message Sent Successfully
15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:46.0033+00:00] 0
15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:46.0111+00:00] [IN|KAFKA|policy-pdp-pap]
15:01:14 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"967b7747-4de9-4350-a28c-de20927bd02f","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"abac\": \"1.0.7\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"abac","version":"1.0.7"}],"name":"opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c","requestId":"34e1f600-4598-41cc-b885-6ca5c41e05ef","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750085986003","deploymentInstanceInfo":""}
15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:46.0117+00:00] messageType: PDP_STATUS
15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:46.0117+00:00] discarding event of type PDP_STATUS
15:01:14 policy-opa-pdp | 2025/06/16 14:59:52 KafkaProducer or producer produce message
15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:52.4846+00:00] [OUT|KAFKA|policy-pdp-pap]
15:01:14 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp heartbeat","response":null,"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"abac","version":"1.0.7"}],"name":"opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c","requestId":"78f5e458-6c6f-4aff-a568-401e9f20ff47","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750085992484","deploymentInstanceInfo":""}
15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:52.4853+00:00] Sending Heartbeat ...
15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:52.4943+00:00] [IN|KAFKA|policy-pdp-pap]
15:01:14 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp heartbeat","response":null,"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"abac","version":"1.0.7"}],"name":"opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c","requestId":"78f5e458-6c6f-4aff-a568-401e9f20ff47","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750085992484","deploymentInstanceInfo":""}
15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:52.4945+00:00] messageType: PDP_STATUS
15:01:14 policy-opa-pdp | DEBU[2025-06-16T14:59:52.4945+00:00] discarding event of type PDP_STATUS
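[Note: between update responses the PDP also publishes periodic PDP_STATUS heartbeats ("Pdp heartbeat") on policy-pdp-pap and discards its own copy when the broker echoes it back. A minimal Go sketch of such a loop with confluent-kafka-go follows; the interval and payload assembly are placeholders, not the opa-pdp implementation.]

    // heartbeat_sketch.go - illustrative heartbeat publisher; the interval and
    // payload assembly are assumptions, only the topic and fields mirror the log.
    package main

    import (
        "fmt"
        "time"

        "github.com/confluentinc/confluent-kafka-go/v2/kafka"
    )

    func main() {
        p, err := kafka.NewProducer(&kafka.ConfigMap{"bootstrap.servers": "kafka:9092"})
        if err != nil {
            panic(err)
        }
        topic := "policy-pdp-pap"
        for range time.Tick(30 * time.Second) { // interval is a placeholder
            msg := fmt.Sprintf(`{"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE",`+
                `"healthy":"HEALTHY","description":"Pdp heartbeat","timestampMs":"%d"}`,
                time.Now().UnixMilli())
            if err := p.Produce(&kafka.Message{
                TopicPartition: kafka.TopicPartition{Topic: &topic, Partition: kafka.PartitionAny},
                Value:          []byte(msg),
            }, nil); err != nil {
                fmt.Println("produce failed:", err)
            }
        }
    }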
m/s"},{"humidity":"40%","id":"0008","location":"Anuradhapura","particle_density":"1.4 g/l","precipitation":"700 mm","temperature":"28 C","timestamp":"2024-02-29","windspeed":"5.8 m/s"},{"humidity":"65%","id":"0009","location":"Matara","particle_density":"1.6 g/l","precipitation":"900 mm","temperature":"32 C","timestamp":"2024-02-29","windspeed":"6.5 m/s"}]} 15:01:14 policy-opa-pdp | DEBU[2025-06-16T15:00:10.0594+00:00] PDP received a decision request. 15:01:14 policy-opa-pdp | DEBU[2025-06-16T15:00:10.0596+00:00] Headers processed for requestId: Unknown 15:01:14 policy-opa-pdp | DEBU[2025-06-16T15:00:10.0599+00:00] Validation successful for request fields 15:01:14 policy-opa-pdp | DEBU[2025-06-16T15:00:10.0599+00:00] SDK making a decision 15:01:14 policy-opa-pdp | {"decision_id":"6d91ea38-73c7-41f0-9f69-377e6ea59eb8","input":{"actions":["read"],"datatypes":["location","temperature","precipitation","windspeed"],"time_period":{"from":"2024-02-27","to":"2024-02-29"}},"labels":{"id":"a5cc5313-c8d9-4203-bac1-aa4aec533000","version":"1.1.0"},"level":"info","metrics":{"timer_rego_external_resolve_ns":1110,"timer_rego_query_compile_ns":168142,"timer_rego_query_eval_ns":1010151,"timer_rego_query_parse_ns":155672,"timer_sdk_decision_eval_ns":1543808},"msg":"Decision Log","nd_builtin_cache":null,"path":"abac","result":{"action_is_read":true,"allow":true,"viewable_sensor_data":[{"location":"Galle","precipitation":"500 mm","temperature":"35 C","windspeed":"7.2 m/s"},{"location":"Jaffna","precipitation":"300 mm","temperature":"-5 C","windspeed":"3.8 m/s"},{"location":"Nuwara Eliya","precipitation":"600 mm","temperature":"25 C","windspeed":"4.0 m/s"},{"location":"Trincomalee","precipitation":"1000 mm","temperature":"20 C","windspeed":"5.0 m/s"}]},"time":"2025-06-16T15:00:10Z","timestamp":"2025-06-16T15:00:10.060057897Z","type":"openpolicyagent.org/decision_logs"} 15:01:14 policy-opa-pdp | DEBU[2025-06-16T15:00:10.0622+00:00] RAW opa Decision output: 15:01:14 policy-opa-pdp | { 15:01:14 policy-opa-pdp | "ID": "6d91ea38-73c7-41f0-9f69-377e6ea59eb8", 15:01:14 policy-opa-pdp | "Result": { 15:01:14 policy-opa-pdp | "action_is_read": true, 15:01:14 policy-opa-pdp | "allow": true, 15:01:14 policy-opa-pdp | "viewable_sensor_data": [ 15:01:14 policy-opa-pdp | { 15:01:14 policy-opa-pdp | "location": "Galle", 15:01:14 policy-opa-pdp | "precipitation": "500 mm", 15:01:14 policy-opa-pdp | "temperature": "35 C", 15:01:14 policy-opa-pdp | "windspeed": "7.2 m/s" 15:01:14 policy-opa-pdp | }, 15:01:14 policy-opa-pdp | { 15:01:14 policy-opa-pdp | "location": "Jaffna", 15:01:14 policy-opa-pdp | "precipitation": "300 mm", 15:01:14 policy-opa-pdp | "temperature": "-5 C", 15:01:14 policy-opa-pdp | "windspeed": "3.8 m/s" 15:01:14 policy-opa-pdp | }, 15:01:14 policy-opa-pdp | { 15:01:14 policy-opa-pdp | "location": "Nuwara Eliya", 15:01:14 policy-opa-pdp | "precipitation": "600 mm", 15:01:14 policy-opa-pdp | "temperature": "25 C", 15:01:14 policy-opa-pdp | "windspeed": "4.0 m/s" 15:01:14 policy-opa-pdp | }, 15:01:14 policy-opa-pdp | { 15:01:14 policy-opa-pdp | "location": "Trincomalee", 15:01:14 policy-opa-pdp | "precipitation": "1000 mm", 15:01:14 policy-opa-pdp | "temperature": "20 C", 15:01:14 policy-opa-pdp | "windspeed": "5.0 m/s" 15:01:14 policy-opa-pdp | } 15:01:14 policy-opa-pdp | ] 15:01:14 policy-opa-pdp | }, 15:01:14 policy-opa-pdp | "Provenance": { 15:01:14 policy-opa-pdp | "version": "1.1.0", 15:01:14 policy-opa-pdp | "build_commit": "", 15:01:14 policy-opa-pdp | "build_timestamp": "", 15:01:14 policy-opa-pdp | 
"build_hostname": "" 15:01:14 policy-opa-pdp | } 15:01:14 policy-opa-pdp | } 15:01:14 policy-opa-pdp | DEBU[2025-06-16T15:00:10.0743+00:00] PDP received a decision request. 15:01:14 policy-opa-pdp | DEBU[2025-06-16T15:00:10.0745+00:00] Headers processed for requestId: Unknown 15:01:14 policy-opa-pdp | DEBU[2025-06-16T15:00:10.0749+00:00] Validation successful for request fields 15:01:14 policy-opa-pdp | WARN[2025-06-16T15:00:10.0751+00:00] Policy Name abc does not exist 15:01:14 policy-opa-pdp | DEBU[2025-06-16T15:00:10.0837+00:00] PDP received a decision request. 15:01:14 policy-opa-pdp | DEBU[2025-06-16T15:00:10.0838+00:00] Headers processed for requestId: Unknown 15:01:14 policy-opa-pdp | DEBU[2025-06-16T15:00:10.0842+00:00] Validation successful for request fields 15:01:14 policy-opa-pdp | DEBU[2025-06-16T15:00:10.0843+00:00] SDK making a decision 15:01:14 policy-opa-pdp | {"decision_id":"e3c8d5df-7b11-42f9-9d3e-59f251ca91aa","input":{"actions":["read"],"datatypes":["location","temperature","precipitation","windspeed"],"time_period":{"from":"2024-02-27","to":"2024-02-29"}},"labels":{"id":"a5cc5313-c8d9-4203-bac1-aa4aec533000","version":"1.1.0"},"level":"info","metrics":{"timer_rego_external_resolve_ns":1060,"timer_rego_query_eval_ns":994492,"timer_sdk_decision_eval_ns":1200804},"msg":"Decision Log","nd_builtin_cache":null,"path":"abac","result":{"action_is_read":true,"allow":true,"viewable_sensor_data":[{"location":"Galle","precipitation":"500 mm","temperature":"35 C","windspeed":"7.2 m/s"},{"location":"Jaffna","precipitation":"300 mm","temperature":"-5 C","windspeed":"3.8 m/s"},{"location":"Nuwara Eliya","precipitation":"600 mm","temperature":"25 C","windspeed":"4.0 m/s"},{"location":"Trincomalee","precipitation":"1000 mm","temperature":"20 C","windspeed":"5.0 m/s"}]},"time":"2025-06-16T15:00:10Z","timestamp":"2025-06-16T15:00:10.084564399Z","type":"openpolicyagent.org/decision_logs"} 15:01:14 policy-opa-pdp | DEBU[2025-06-16T15:00:10.0862+00:00] RAW opa Decision output: 15:01:14 policy-opa-pdp | { 15:01:14 policy-opa-pdp | "ID": "e3c8d5df-7b11-42f9-9d3e-59f251ca91aa", 15:01:14 policy-opa-pdp | "Result": { 15:01:14 policy-opa-pdp | "action_is_read": true, 15:01:14 policy-opa-pdp | "allow": true, 15:01:14 policy-opa-pdp | "viewable_sensor_data": [ 15:01:14 policy-opa-pdp | { 15:01:14 policy-opa-pdp | "location": "Galle", 15:01:14 policy-opa-pdp | "precipitation": "500 mm", 15:01:14 policy-opa-pdp | "temperature": "35 C", 15:01:14 policy-opa-pdp | "windspeed": "7.2 m/s" 15:01:14 policy-opa-pdp | }, 15:01:14 policy-opa-pdp | { 15:01:14 policy-opa-pdp | "location": "Jaffna", 15:01:14 policy-opa-pdp | "precipitation": "300 mm", 15:01:14 policy-opa-pdp | "temperature": "-5 C", 15:01:14 policy-opa-pdp | "windspeed": "3.8 m/s" 15:01:14 policy-opa-pdp | }, 15:01:14 policy-opa-pdp | { 15:01:14 policy-opa-pdp | "location": "Nuwara Eliya", 15:01:14 policy-opa-pdp | "precipitation": "600 mm", 15:01:14 policy-opa-pdp | "temperature": "25 C", 15:01:14 policy-opa-pdp | "windspeed": "4.0 m/s" 15:01:14 policy-opa-pdp | }, 15:01:14 policy-opa-pdp | { 15:01:14 policy-opa-pdp | "location": "Trincomalee", 15:01:14 policy-opa-pdp | "precipitation": "1000 mm", 15:01:14 policy-opa-pdp | "temperature": "20 C", 15:01:14 policy-opa-pdp | "windspeed": "5.0 m/s" 15:01:14 policy-opa-pdp | } 15:01:14 policy-opa-pdp | ] 15:01:14 policy-opa-pdp | }, 15:01:14 policy-opa-pdp | "Provenance": { 15:01:14 policy-opa-pdp | "version": "1.1.0", 15:01:14 policy-opa-pdp | "build_commit": "", 15:01:14 policy-opa-pdp | 
"build_timestamp": "", 15:01:14 policy-opa-pdp | "build_hostname": "" 15:01:14 policy-opa-pdp | } 15:01:14 policy-opa-pdp | } 15:01:14 policy-opa-pdp | DEBU[2025-06-16T15:00:10.6751+00:00] [IN|KAFKA|policy-pdp-pap] 15:01:14 policy-opa-pdp | {"source":"pap-ef1c2203-595c-4ac7-b11f-f04e7efc9986","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"abac","version":"1.0.7"}],"messageName":"PDP_UPDATE","requestId":"8f524fdb-2d83-46a9-9b0a-c29f7875f05b","timestampMs":1750086010653,"name":"opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 15:01:14 policy-opa-pdp | DEBU[2025-06-16T15:00:10.6752+00:00] messageType: PDP_UPDATE 15:01:14 policy-opa-pdp | DEBU[2025-06-16T15:00:10.6754+00:00] PDP_UPDATE Message received: {"source":"pap-ef1c2203-595c-4ac7-b11f-f04e7efc9986","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"abac","version":"1.0.7"}],"messageName":"PDP_UPDATE","requestId":"8f524fdb-2d83-46a9-9b0a-c29f7875f05b","timestampMs":1750086010653,"name":"opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 15:01:14 policy-opa-pdp | INFO[2025-06-16T15:00:10.6755+00:00] Found Policies to be undeployed 15:01:14 policy-opa-pdp | INFO[2025-06-16T15:00:10.6755+00:00] Extracted Policy Name: abac, Version: 1.0.7 for undeployment 15:01:14 policy-opa-pdp | DEBU[2025-06-16T15:00:10.6756+00:00] Deleting Policy from OPA : /abac 15:01:14 policy-opa-pdp | DEBU[2025-06-16T15:00:10.6789+00:00] Removing policy directory: /opt/policies/abac 15:01:14 policy-opa-pdp | DEBU[2025-06-16T15:00:10.6793+00:00] Deleting data from OPA : /node/abac 15:01:14 policy-opa-pdp | DEBU[2025-06-16T15:00:10.6793+00:00] Analyzing dataPath: /node/abac 15:01:14 policy-opa-pdp | DEBU[2025-06-16T15:00:10.6794+00:00] Path segments: [ node abac] 15:01:14 policy-opa-pdp | DEBU[2025-06-16T15:00:10.6794+00:00] Path doesn't have any parent-child hierarchy;so returning the original path: /node/abac 15:01:14 policy-opa-pdp | DEBU[2025-06-16T15:00:10.6794+00:00] Removing data directory: /opt/data/node/abac 15:01:14 policy-opa-pdp | INFO[2025-06-16T15:00:10.6798+00:00] PoliciesDeployed Map: { 15:01:14 policy-opa-pdp | "deployed_policies_dict": [ 15:01:14 policy-opa-pdp | { 15:01:14 policy-opa-pdp | "data": [ 15:01:14 policy-opa-pdp | "node.slice.capacity.check" 15:01:14 policy-opa-pdp | ], 15:01:14 policy-opa-pdp | "policy": [ 15:01:14 policy-opa-pdp | "slice.capacity.check" 15:01:14 policy-opa-pdp | ], 15:01:14 policy-opa-pdp | "policy-id": "slice.capacity.check", 15:01:14 policy-opa-pdp | "policy-version": "1.0.0" 15:01:14 policy-opa-pdp | } 15:01:14 policy-opa-pdp | ] 15:01:14 policy-opa-pdp | } 15:01:14 policy-opa-pdp | DEBU[2025-06-16T15:00:10.6798+00:00] Policies Map After Undeployment : { 15:01:14 policy-opa-pdp | "deployed_policies_dict": [ 15:01:14 policy-opa-pdp | { 15:01:14 policy-opa-pdp | "data": [ 15:01:14 policy-opa-pdp | "node.slice.capacity.check" 15:01:14 policy-opa-pdp | ], 15:01:14 policy-opa-pdp | "policy": [ 15:01:14 policy-opa-pdp | "slice.capacity.check" 15:01:14 policy-opa-pdp | ], 15:01:14 policy-opa-pdp | "policy-id": "slice.capacity.check", 15:01:14 policy-opa-pdp | "policy-version": "1.0.0" 15:01:14 policy-opa-pdp | } 15:01:14 policy-opa-pdp | ] 15:01:14 policy-opa-pdp | } 15:01:14 policy-opa-pdp | INFO[2025-06-16T15:00:10.6798+00:00] Processed policies_to_be_undeployed successfully 15:01:14 policy-opa-pdp | INFO[2025-06-16T15:00:10.6799+00:00] Sending PDP Status With Update Response 15:01:14 policy-opa-pdp | 2025/06/16 15:00:10 
KafkaProducer or producer produce message 15:01:14 policy-opa-pdp | DEBU[2025-06-16T15:00:10.6802+00:00] [OUT|KAFKA|policy-pdp-pap] 15:01:14 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"8f524fdb-2d83-46a9-9b0a-c29f7875f05b","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"abac\": \"1.0.7\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c","requestId":"c240ed42-7228-4262-8309-afbd26eea197","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750086010679","deploymentInstanceInfo":""} 15:01:14 policy-opa-pdp | INFO[2025-06-16T15:00:10.6802+00:00] PDP_STATUS Message Sent Successfully 15:01:14 policy-opa-pdp | DEBU[2025-06-16T15:00:10.6803+00:00] 0 15:01:14 policy-opa-pdp | DEBU[2025-06-16T15:00:10.6893+00:00] [IN|KAFKA|policy-pdp-pap] 15:01:14 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"8f524fdb-2d83-46a9-9b0a-c29f7875f05b","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"abac\": \"1.0.7\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c","requestId":"c240ed42-7228-4262-8309-afbd26eea197","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750086010679","deploymentInstanceInfo":""} 15:01:14 policy-opa-pdp | DEBU[2025-06-16T15:00:10.6893+00:00] messageType: PDP_STATUS 15:01:14 policy-opa-pdp | DEBU[2025-06-16T15:00:10.6893+00:00] discarding event of type PDP_STATUS 15:01:14 policy-pap | Waiting for api port 6969... 15:01:14 policy-pap | api (172.17.0.8:6969) open 15:01:14 policy-pap | Waiting for kafka port 9092... 15:01:14 policy-pap | kafka (172.17.0.6:9092) open 15:01:14 policy-pap | Policy pap config file: /opt/app/policy/pap/etc/papParameters.yaml 15:01:14 policy-pap | PDP group configuration file: /opt/app/policy/pap/etc/mounted/groups.json 15:01:14 policy-pap | 15:01:14 policy-pap | . ____ _ __ _ _ 15:01:14 policy-pap | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ 15:01:14 policy-pap | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ 15:01:14 policy-pap | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) 15:01:14 policy-pap | ' |____| .__|_| |_|_| |_\__, | / / / / 15:01:14 policy-pap | =========|_|==============|___/=/_/_/_/ 15:01:14 policy-pap | 15:01:14 policy-pap | :: Spring Boot :: (v3.4.6) 15:01:14 policy-pap | 15:01:14 policy-pap | [2025-06-16T14:55:46.379+00:00|INFO|PolicyPapApplication|main] Starting PolicyPapApplication using Java 17.0.15 with PID 87 (/app/pap.jar started by policy in /opt/app/policy/pap/bin) 15:01:14 policy-pap | [2025-06-16T14:55:46.380+00:00|INFO|PolicyPapApplication|main] The following 1 profile is active: "default" 15:01:14 policy-pap | [2025-06-16T14:55:47.978+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. 15:01:14 policy-pap | [2025-06-16T14:55:48.080+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 89 ms. Found 7 JPA repository interfaces. 
15:01:14 policy-pap | [2025-06-16T14:55:49.201+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port 6969 (http) 15:01:14 policy-pap | [2025-06-16T14:55:49.219+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] 15:01:14 policy-pap | [2025-06-16T14:55:49.221+00:00|INFO|StandardService|main] Starting service [Tomcat] 15:01:14 policy-pap | [2025-06-16T14:55:49.221+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.41] 15:01:14 policy-pap | [2025-06-16T14:55:49.283+00:00|INFO|[/policy/pap/v1]|main] Initializing Spring embedded WebApplicationContext 15:01:14 policy-pap | [2025-06-16T14:55:49.284+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 2848 ms 15:01:14 policy-pap | [2025-06-16T14:55:49.776+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] 15:01:14 policy-pap | [2025-06-16T14:55:49.864+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.6.16.Final 15:01:14 policy-pap | [2025-06-16T14:55:49.915+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled 15:01:14 policy-pap | [2025-06-16T14:55:50.407+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer 15:01:14 policy-pap | [2025-06-16T14:55:50.462+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... 15:01:14 policy-pap | [2025-06-16T14:55:50.705+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.postgresql.jdbc.PgConnection@1d6a22dd 15:01:14 policy-pap | [2025-06-16T14:55:50.708+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. 15:01:14 policy-pap | [2025-06-16T14:55:50.825+00:00|INFO|pooling|main] HHH10001005: Database info: 15:01:14 policy-pap | Database JDBC URL [Connecting through datasource 'HikariDataSource (HikariPool-1)'] 15:01:14 policy-pap | Database driver: undefined/unknown 15:01:14 policy-pap | Database version: 16.4 15:01:14 policy-pap | Autocommit mode: undefined/unknown 15:01:14 policy-pap | Isolation level: undefined/unknown 15:01:14 policy-pap | Minimum pool size: undefined/unknown 15:01:14 policy-pap | Maximum pool size: undefined/unknown 15:01:14 policy-pap | [2025-06-16T14:55:52.969+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) 15:01:14 policy-pap | [2025-06-16T14:55:52.973+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' 15:01:14 policy-pap | [2025-06-16T14:55:54.346+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 15:01:14 policy-pap | allow.auto.create.topics = true 15:01:14 policy-pap | auto.commit.interval.ms = 5000 15:01:14 policy-pap | auto.include.jmx.reporter = true 15:01:14 policy-pap | auto.offset.reset = latest 15:01:14 policy-pap | bootstrap.servers = [kafka:9092] 15:01:14 policy-pap | check.crcs = true 15:01:14 policy-pap | client.dns.lookup = use_all_dns_ips 15:01:14 policy-pap | client.id = consumer-477ccfe3-c295-43ff-8034-7aaaa0b17546-1 15:01:14 policy-pap | client.rack = 15:01:14 policy-pap | connections.max.idle.ms = 540000 15:01:14 policy-pap | default.api.timeout.ms = 60000 15:01:14 policy-pap | enable.auto.commit = true 15:01:14 policy-pap | enable.metrics.push = true 15:01:14 policy-pap | exclude.internal.topics = true 15:01:14 policy-pap | fetch.max.bytes = 52428800 15:01:14 policy-pap | fetch.max.wait.ms = 500 15:01:14 policy-pap | 
fetch.min.bytes = 1 15:01:14 policy-pap | group.id = 477ccfe3-c295-43ff-8034-7aaaa0b17546 15:01:14 policy-pap | group.instance.id = null 15:01:14 policy-pap | group.protocol = classic 15:01:14 policy-pap | group.remote.assignor = null 15:01:14 policy-pap | heartbeat.interval.ms = 3000 15:01:14 policy-pap | interceptor.classes = [] 15:01:14 policy-pap | internal.leave.group.on.close = true 15:01:14 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 15:01:14 policy-pap | isolation.level = read_uncommitted 15:01:14 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 15:01:14 policy-pap | max.partition.fetch.bytes = 1048576 15:01:14 policy-pap | max.poll.interval.ms = 300000 15:01:14 policy-pap | max.poll.records = 500 15:01:14 policy-pap | metadata.max.age.ms = 300000 15:01:14 policy-pap | metadata.recovery.strategy = none 15:01:14 policy-pap | metric.reporters = [] 15:01:14 policy-pap | metrics.num.samples = 2 15:01:14 policy-pap | metrics.recording.level = INFO 15:01:14 policy-pap | metrics.sample.window.ms = 30000 15:01:14 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 15:01:14 policy-pap | receive.buffer.bytes = 65536 15:01:14 policy-pap | reconnect.backoff.max.ms = 1000 15:01:14 policy-pap | reconnect.backoff.ms = 50 15:01:14 policy-pap | request.timeout.ms = 30000 15:01:14 policy-pap | retry.backoff.max.ms = 1000 15:01:14 policy-pap | retry.backoff.ms = 100 15:01:14 policy-pap | sasl.client.callback.handler.class = null 15:01:14 policy-pap | sasl.jaas.config = null 15:01:14 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 15:01:14 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 15:01:14 policy-pap | sasl.kerberos.service.name = null 15:01:14 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 15:01:14 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 15:01:14 policy-pap | sasl.login.callback.handler.class = null 15:01:14 policy-pap | sasl.login.class = null 15:01:14 policy-pap | sasl.login.connect.timeout.ms = null 15:01:14 policy-pap | sasl.login.read.timeout.ms = null 15:01:14 policy-pap | sasl.login.refresh.buffer.seconds = 300 15:01:14 policy-pap | sasl.login.refresh.min.period.seconds = 60 15:01:14 policy-pap | sasl.login.refresh.window.factor = 0.8 15:01:14 policy-pap | sasl.login.refresh.window.jitter = 0.05 15:01:14 policy-pap | sasl.login.retry.backoff.max.ms = 10000 15:01:14 policy-pap | sasl.login.retry.backoff.ms = 100 15:01:14 policy-pap | sasl.mechanism = GSSAPI 15:01:14 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 15:01:14 policy-pap | sasl.oauthbearer.expected.audience = null 15:01:14 policy-pap | sasl.oauthbearer.expected.issuer = null 15:01:14 policy-pap | sasl.oauthbearer.header.urlencode = false 15:01:14 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 15:01:14 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 15:01:14 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 15:01:14 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 15:01:14 policy-pap | sasl.oauthbearer.scope.claim.name = scope 15:01:14 policy-pap | sasl.oauthbearer.sub.claim.name = sub 15:01:14 policy-pap | sasl.oauthbearer.token.endpoint.url = null 15:01:14 policy-pap | security.protocol = PLAINTEXT 15:01:14 policy-pap | security.providers = null 15:01:14 policy-pap | send.buffer.bytes = 131072 15:01:14 policy-pap | 
session.timeout.ms = 45000 15:01:14 policy-pap | socket.connection.setup.timeout.max.ms = 30000 15:01:14 policy-pap | socket.connection.setup.timeout.ms = 10000 15:01:14 policy-pap | ssl.cipher.suites = null 15:01:14 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 15:01:14 policy-pap | ssl.endpoint.identification.algorithm = https 15:01:14 policy-pap | ssl.engine.factory.class = null 15:01:14 policy-pap | ssl.key.password = null 15:01:14 policy-pap | ssl.keymanager.algorithm = SunX509 15:01:14 policy-pap | ssl.keystore.certificate.chain = null 15:01:14 policy-pap | ssl.keystore.key = null 15:01:14 policy-pap | ssl.keystore.location = null 15:01:14 policy-pap | ssl.keystore.password = null 15:01:14 policy-pap | ssl.keystore.type = JKS 15:01:14 policy-pap | ssl.protocol = TLSv1.3 15:01:14 policy-pap | ssl.provider = null 15:01:14 policy-pap | ssl.secure.random.implementation = null 15:01:14 policy-pap | ssl.trustmanager.algorithm = PKIX 15:01:14 policy-pap | ssl.truststore.certificates = null 15:01:14 policy-pap | ssl.truststore.location = null 15:01:14 policy-pap | ssl.truststore.password = null 15:01:14 policy-pap | ssl.truststore.type = JKS 15:01:14 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 15:01:14 policy-pap | 15:01:14 policy-pap | [2025-06-16T14:55:54.410+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector 15:01:14 policy-pap | [2025-06-16T14:55:54.571+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 15:01:14 policy-pap | [2025-06-16T14:55:54.571+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 15:01:14 policy-pap | [2025-06-16T14:55:54.571+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750085754569 15:01:14 policy-pap | [2025-06-16T14:55:54.574+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-477ccfe3-c295-43ff-8034-7aaaa0b17546-1, groupId=477ccfe3-c295-43ff-8034-7aaaa0b17546] Subscribed to topic(s): policy-pdp-pap 15:01:14 policy-pap | [2025-06-16T14:55:54.575+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 15:01:14 policy-pap | allow.auto.create.topics = true 15:01:14 policy-pap | auto.commit.interval.ms = 5000 15:01:14 policy-pap | auto.include.jmx.reporter = true 15:01:14 policy-pap | auto.offset.reset = latest 15:01:14 policy-pap | bootstrap.servers = [kafka:9092] 15:01:14 policy-pap | check.crcs = true 15:01:14 policy-pap | client.dns.lookup = use_all_dns_ips 15:01:14 policy-pap | client.id = consumer-policy-pap-2 15:01:14 policy-pap | client.rack = 15:01:14 policy-pap | connections.max.idle.ms = 540000 15:01:14 policy-pap | default.api.timeout.ms = 60000 15:01:14 policy-pap | enable.auto.commit = true 15:01:14 policy-pap | enable.metrics.push = true 15:01:14 policy-pap | exclude.internal.topics = true 15:01:14 policy-pap | fetch.max.bytes = 52428800 15:01:14 policy-pap | fetch.max.wait.ms = 500 15:01:14 policy-pap | fetch.min.bytes = 1 15:01:14 policy-pap | group.id = policy-pap 15:01:14 policy-pap | group.instance.id = null 15:01:14 policy-pap | group.protocol = classic 15:01:14 policy-pap | group.remote.assignor = null 15:01:14 policy-pap | heartbeat.interval.ms = 3000 15:01:14 policy-pap | interceptor.classes = [] 15:01:14 policy-pap | internal.leave.group.on.close = true 15:01:14 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 15:01:14 policy-pap | isolation.level = read_uncommitted 15:01:14 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 15:01:14 policy-pap | 
max.partition.fetch.bytes = 1048576 15:01:14 policy-pap | max.poll.interval.ms = 300000 15:01:14 policy-pap | max.poll.records = 500 15:01:14 policy-pap | metadata.max.age.ms = 300000 15:01:14 policy-pap | metadata.recovery.strategy = none 15:01:14 policy-pap | metric.reporters = [] 15:01:14 policy-pap | metrics.num.samples = 2 15:01:14 policy-pap | metrics.recording.level = INFO 15:01:14 policy-pap | metrics.sample.window.ms = 30000 15:01:14 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 15:01:14 policy-pap | receive.buffer.bytes = 65536 15:01:14 policy-pap | reconnect.backoff.max.ms = 1000 15:01:14 policy-pap | reconnect.backoff.ms = 50 15:01:14 policy-pap | request.timeout.ms = 30000 15:01:14 policy-pap | retry.backoff.max.ms = 1000 15:01:14 policy-pap | retry.backoff.ms = 100 15:01:14 policy-pap | sasl.client.callback.handler.class = null 15:01:14 policy-pap | sasl.jaas.config = null 15:01:14 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 15:01:14 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 15:01:14 policy-pap | sasl.kerberos.service.name = null 15:01:14 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 15:01:14 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 15:01:14 policy-pap | sasl.login.callback.handler.class = null 15:01:14 policy-pap | sasl.login.class = null 15:01:14 policy-pap | sasl.login.connect.timeout.ms = null 15:01:14 policy-pap | sasl.login.read.timeout.ms = null 15:01:14 policy-pap | sasl.login.refresh.buffer.seconds = 300 15:01:14 policy-pap | sasl.login.refresh.min.period.seconds = 60 15:01:14 policy-pap | sasl.login.refresh.window.factor = 0.8 15:01:14 policy-pap | sasl.login.refresh.window.jitter = 0.05 15:01:14 policy-pap | sasl.login.retry.backoff.max.ms = 10000 15:01:14 policy-pap | sasl.login.retry.backoff.ms = 100 15:01:14 policy-pap | sasl.mechanism = GSSAPI 15:01:14 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 15:01:14 policy-pap | sasl.oauthbearer.expected.audience = null 15:01:14 policy-pap | sasl.oauthbearer.expected.issuer = null 15:01:14 policy-pap | sasl.oauthbearer.header.urlencode = false 15:01:14 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 15:01:14 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 15:01:14 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 15:01:14 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 15:01:14 policy-pap | sasl.oauthbearer.scope.claim.name = scope 15:01:14 policy-pap | sasl.oauthbearer.sub.claim.name = sub 15:01:14 policy-pap | sasl.oauthbearer.token.endpoint.url = null 15:01:14 policy-pap | security.protocol = PLAINTEXT 15:01:14 policy-pap | security.providers = null 15:01:14 policy-pap | send.buffer.bytes = 131072 15:01:14 policy-pap | session.timeout.ms = 45000 15:01:14 policy-pap | socket.connection.setup.timeout.max.ms = 30000 15:01:14 policy-pap | socket.connection.setup.timeout.ms = 10000 15:01:14 policy-pap | ssl.cipher.suites = null 15:01:14 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 15:01:14 policy-pap | ssl.endpoint.identification.algorithm = https 15:01:14 policy-pap | ssl.engine.factory.class = null 15:01:14 policy-pap | ssl.key.password = null 15:01:14 policy-pap | ssl.keymanager.algorithm = SunX509 15:01:14 policy-pap | ssl.keystore.certificate.chain = null 15:01:14 policy-pap | ssl.keystore.key = null 15:01:14 policy-pap | ssl.keystore.location = null 15:01:14 
policy-pap | ssl.keystore.password = null 15:01:14 policy-pap | ssl.keystore.type = JKS 15:01:14 policy-pap | ssl.protocol = TLSv1.3 15:01:14 policy-pap | ssl.provider = null 15:01:14 policy-pap | ssl.secure.random.implementation = null 15:01:14 policy-pap | ssl.trustmanager.algorithm = PKIX 15:01:14 policy-pap | ssl.truststore.certificates = null 15:01:14 policy-pap | ssl.truststore.location = null 15:01:14 policy-pap | ssl.truststore.password = null 15:01:14 policy-pap | ssl.truststore.type = JKS 15:01:14 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 15:01:14 policy-pap | 15:01:14 policy-pap | [2025-06-16T14:55:54.575+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector 15:01:14 policy-pap | [2025-06-16T14:55:54.583+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 15:01:14 policy-pap | [2025-06-16T14:55:54.584+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 15:01:14 policy-pap | [2025-06-16T14:55:54.584+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750085754583 15:01:14 policy-pap | [2025-06-16T14:55:54.584+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-policy-pap-2, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap 15:01:14 policy-pap | [2025-06-16T14:55:54.987+00:00|INFO|PapDatabaseInitializer|main] Created initial pdpGroup in DB - PdpGroups(groups=[PdpGroup(name=opaGroup, description=null, pdpGroupState=ACTIVE, properties={}, pdpSubgroups=[PdpSubGroup(pdpType=opa, supportedPolicyTypes=[onap.policies.native.opa 1.0.0], policies=[slice.capacity.check 1.0.0], currentInstanceCount=0, desiredInstanceCount=1, properties={}, pdpInstances=null)])]) from /opt/app/policy/pap/etc/mounted/groups.json 15:01:14 policy-pap | [2025-06-16T14:55:55.138+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning 15:01:14 policy-pap | [2025-06-16T14:55:55.239+00:00|INFO|InitializeUserDetailsBeanManagerConfigurer$InitializeUserDetailsManagerConfigurer|main] Global AuthenticationManager configured with UserDetailsService bean with name inMemoryUserDetailsManager 15:01:14 policy-pap | [2025-06-16T14:55:55.474+00:00|INFO|OptionalValidatorFactoryBean|main] Failed to set up a Bean Validation provider: jakarta.validation.NoProviderFoundException: Unable to create a Configuration, because no Jakarta Validation provider could be found. Add a provider like Hibernate Validator (RI) to your classpath. 
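[Note: the consumer settings pap dumps above (bootstrap.servers [kafka:9092], group ids 477ccfe3-c295-43ff-8034-7aaaa0b17546 and policy-pap, StringDeserializer, topic policy-pdp-pap) are the plain-Kafka half of the PDP_UPDATE/PDP_STATUS traffic shown earlier from the opa-pdp side. For comparison, a rough Go counterpart of such a consumer with confluent-kafka-go is sketched below; the group id used by opa-pdp itself is not visible in this log, so it is a placeholder.]

    // consumer_sketch.go - rough Go counterpart of the consumer pap configures
    // above; topic and bootstrap mirror the log, the group id is assumed.
    package main

    import (
        "fmt"

        "github.com/confluentinc/confluent-kafka-go/v2/kafka"
    )

    func main() {
        c, err := kafka.NewConsumer(&kafka.ConfigMap{
            "bootstrap.servers": "kafka:9092",
            "group.id":          "opa-pdp", // placeholder; pap uses its own group ids
            "auto.offset.reset": "latest",
        })
        if err != nil {
            panic(err)
        }
        defer c.Close()
        if err := c.SubscribeTopics([]string{"policy-pdp-pap"}, nil); err != nil {
            panic(err)
        }
        for {
            msg, err := c.ReadMessage(-1) // block until a PDP_UPDATE/PDP_STATUS arrives
            if err != nil {
                continue
            }
            fmt.Printf("[IN|KAFKA|%s] %s\n", *msg.TopicPartition.Topic, msg.Value)
        }
    }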
15:01:14 policy-pap | [2025-06-16T14:55:56.289+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoints beneath base path '' 15:01:14 policy-pap | [2025-06-16T14:55:56.411+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] 15:01:14 policy-pap | [2025-06-16T14:55:56.433+00:00|INFO|TomcatWebServer|main] Tomcat started on port 6969 (http) with context path '/policy/pap/v1' 15:01:14 policy-pap | [2025-06-16T14:55:56.455+00:00|INFO|ServiceManager|main] Policy PAP starting 15:01:14 policy-pap | [2025-06-16T14:55:56.456+00:00|INFO|ServiceManager|main] Policy PAP starting Meter Registry 15:01:14 policy-pap | [2025-06-16T14:55:56.457+00:00|INFO|ServiceManager|main] Policy PAP starting PAP parameters 15:01:14 policy-pap | [2025-06-16T14:55:56.458+00:00|INFO|ServiceManager|main] Policy PAP starting Pdp Heartbeat Listener 15:01:14 policy-pap | [2025-06-16T14:55:56.458+00:00|INFO|ServiceManager|main] Policy PAP starting Response Request ID Dispatcher 15:01:14 policy-pap | [2025-06-16T14:55:56.458+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Request ID Dispatcher 15:01:14 policy-pap | [2025-06-16T14:55:56.458+00:00|INFO|ServiceManager|main] Policy PAP starting Response Message Dispatcher 15:01:14 policy-pap | [2025-06-16T14:55:56.460+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=477ccfe3-c295-43ff-8034-7aaaa0b17546, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@198ad9e0 15:01:14 policy-pap | [2025-06-16T14:55:56.474+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=477ccfe3-c295-43ff-8034-7aaaa0b17546, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 15:01:14 policy-pap | [2025-06-16T14:55:56.475+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 15:01:14 policy-pap | allow.auto.create.topics = true 15:01:14 policy-pap | auto.commit.interval.ms = 5000 15:01:14 policy-pap | auto.include.jmx.reporter = true 15:01:14 policy-pap | auto.offset.reset = latest 15:01:14 policy-pap | bootstrap.servers = [kafka:9092] 15:01:14 policy-pap | check.crcs = true 15:01:14 policy-pap | client.dns.lookup = use_all_dns_ips 15:01:14 policy-pap | client.id = consumer-477ccfe3-c295-43ff-8034-7aaaa0b17546-3 15:01:14 policy-pap | client.rack = 15:01:14 policy-pap | connections.max.idle.ms = 540000 15:01:14 policy-pap | default.api.timeout.ms = 60000 15:01:14 policy-pap | enable.auto.commit = true 15:01:14 policy-pap | enable.metrics.push = true 15:01:14 policy-pap | exclude.internal.topics = true 15:01:14 policy-pap | 
fetch.max.bytes = 52428800 15:01:14 policy-pap | fetch.max.wait.ms = 500 15:01:14 policy-pap | fetch.min.bytes = 1 15:01:14 policy-pap | group.id = 477ccfe3-c295-43ff-8034-7aaaa0b17546 15:01:14 policy-pap | group.instance.id = null 15:01:14 policy-pap | group.protocol = classic 15:01:14 policy-pap | group.remote.assignor = null 15:01:14 policy-pap | heartbeat.interval.ms = 3000 15:01:14 policy-pap | interceptor.classes = [] 15:01:14 policy-pap | internal.leave.group.on.close = true 15:01:14 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 15:01:14 policy-pap | isolation.level = read_uncommitted 15:01:14 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 15:01:14 policy-pap | max.partition.fetch.bytes = 1048576 15:01:14 policy-pap | max.poll.interval.ms = 300000 15:01:14 policy-pap | max.poll.records = 500 15:01:14 policy-pap | metadata.max.age.ms = 300000 15:01:14 policy-pap | metadata.recovery.strategy = none 15:01:14 policy-pap | metric.reporters = [] 15:01:14 policy-pap | metrics.num.samples = 2 15:01:14 policy-pap | metrics.recording.level = INFO 15:01:14 policy-pap | metrics.sample.window.ms = 30000 15:01:14 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 15:01:14 policy-pap | receive.buffer.bytes = 65536 15:01:14 policy-pap | reconnect.backoff.max.ms = 1000 15:01:14 policy-pap | reconnect.backoff.ms = 50 15:01:14 policy-pap | request.timeout.ms = 30000 15:01:14 policy-pap | retry.backoff.max.ms = 1000 15:01:14 policy-pap | retry.backoff.ms = 100 15:01:14 policy-pap | sasl.client.callback.handler.class = null 15:01:14 policy-pap | sasl.jaas.config = null 15:01:14 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 15:01:14 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 15:01:14 policy-pap | sasl.kerberos.service.name = null 15:01:14 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 15:01:14 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 15:01:14 policy-pap | sasl.login.callback.handler.class = null 15:01:14 policy-pap | sasl.login.class = null 15:01:14 policy-pap | sasl.login.connect.timeout.ms = null 15:01:14 policy-pap | sasl.login.read.timeout.ms = null 15:01:14 policy-pap | sasl.login.refresh.buffer.seconds = 300 15:01:14 policy-pap | sasl.login.refresh.min.period.seconds = 60 15:01:14 policy-pap | sasl.login.refresh.window.factor = 0.8 15:01:14 policy-pap | sasl.login.refresh.window.jitter = 0.05 15:01:14 policy-pap | sasl.login.retry.backoff.max.ms = 10000 15:01:14 policy-pap | sasl.login.retry.backoff.ms = 100 15:01:14 policy-pap | sasl.mechanism = GSSAPI 15:01:14 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 15:01:14 policy-pap | sasl.oauthbearer.expected.audience = null 15:01:14 policy-pap | sasl.oauthbearer.expected.issuer = null 15:01:14 policy-pap | sasl.oauthbearer.header.urlencode = false 15:01:14 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 15:01:14 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 15:01:14 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 15:01:14 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 15:01:14 policy-pap | sasl.oauthbearer.scope.claim.name = scope 15:01:14 policy-pap | sasl.oauthbearer.sub.claim.name = sub 15:01:14 policy-pap | sasl.oauthbearer.token.endpoint.url = null 15:01:14 policy-pap | security.protocol = PLAINTEXT 15:01:14 policy-pap | 
security.providers = null 15:01:14 policy-pap | send.buffer.bytes = 131072 15:01:14 policy-pap | session.timeout.ms = 45000 15:01:14 policy-pap | socket.connection.setup.timeout.max.ms = 30000 15:01:14 policy-pap | socket.connection.setup.timeout.ms = 10000 15:01:14 policy-pap | ssl.cipher.suites = null 15:01:14 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 15:01:14 policy-pap | ssl.endpoint.identification.algorithm = https 15:01:14 policy-pap | ssl.engine.factory.class = null 15:01:14 policy-pap | ssl.key.password = null 15:01:14 policy-pap | ssl.keymanager.algorithm = SunX509 15:01:14 policy-pap | ssl.keystore.certificate.chain = null 15:01:14 policy-pap | ssl.keystore.key = null 15:01:14 policy-pap | ssl.keystore.location = null 15:01:14 policy-pap | ssl.keystore.password = null 15:01:14 policy-pap | ssl.keystore.type = JKS 15:01:14 policy-pap | ssl.protocol = TLSv1.3 15:01:14 policy-pap | ssl.provider = null 15:01:14 policy-pap | ssl.secure.random.implementation = null 15:01:14 policy-pap | ssl.trustmanager.algorithm = PKIX 15:01:14 policy-pap | ssl.truststore.certificates = null 15:01:14 policy-pap | ssl.truststore.location = null 15:01:14 policy-pap | ssl.truststore.password = null 15:01:14 policy-pap | ssl.truststore.type = JKS 15:01:14 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 15:01:14 policy-pap | 15:01:14 policy-pap | [2025-06-16T14:55:56.476+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector 15:01:14 policy-pap | [2025-06-16T14:55:56.483+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 15:01:14 policy-pap | [2025-06-16T14:55:56.483+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 15:01:14 policy-pap | [2025-06-16T14:55:56.483+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750085756483 15:01:14 policy-pap | [2025-06-16T14:55:56.484+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-477ccfe3-c295-43ff-8034-7aaaa0b17546-3, groupId=477ccfe3-c295-43ff-8034-7aaaa0b17546] Subscribed to topic(s): policy-pdp-pap 15:01:14 policy-pap | [2025-06-16T14:55:56.485+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Message Dispatcher 15:01:14 policy-pap | [2025-06-16T14:55:56.485+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=173e64e7-0b21-4e96-9e8b-9121badea817, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@64123c4d 15:01:14 policy-pap | [2025-06-16T14:55:56.485+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=173e64e7-0b21-4e96-9e8b-9121badea817, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, 
effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 15:01:14 policy-pap | [2025-06-16T14:55:56.485+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 15:01:14 policy-pap | allow.auto.create.topics = true 15:01:14 policy-pap | auto.commit.interval.ms = 5000 15:01:14 policy-pap | auto.include.jmx.reporter = true 15:01:14 policy-pap | auto.offset.reset = latest 15:01:14 policy-pap | bootstrap.servers = [kafka:9092] 15:01:14 policy-pap | check.crcs = true 15:01:14 policy-pap | client.dns.lookup = use_all_dns_ips 15:01:14 policy-pap | client.id = consumer-policy-pap-4 15:01:14 policy-pap | client.rack = 15:01:14 policy-pap | connections.max.idle.ms = 540000 15:01:14 policy-pap | default.api.timeout.ms = 60000 15:01:14 policy-pap | enable.auto.commit = true 15:01:14 policy-pap | enable.metrics.push = true 15:01:14 policy-pap | exclude.internal.topics = true 15:01:14 policy-pap | fetch.max.bytes = 52428800 15:01:14 policy-pap | fetch.max.wait.ms = 500 15:01:14 policy-pap | fetch.min.bytes = 1 15:01:14 policy-pap | group.id = policy-pap 15:01:14 policy-pap | group.instance.id = null 15:01:14 policy-pap | group.protocol = classic 15:01:14 policy-pap | group.remote.assignor = null 15:01:14 policy-pap | heartbeat.interval.ms = 3000 15:01:14 policy-pap | interceptor.classes = [] 15:01:14 policy-pap | internal.leave.group.on.close = true 15:01:14 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 15:01:14 policy-pap | isolation.level = read_uncommitted 15:01:14 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 15:01:14 policy-pap | max.partition.fetch.bytes = 1048576 15:01:14 policy-pap | max.poll.interval.ms = 300000 15:01:14 policy-pap | max.poll.records = 500 15:01:14 policy-pap | metadata.max.age.ms = 300000 15:01:14 policy-pap | metadata.recovery.strategy = none 15:01:14 policy-pap | metric.reporters = [] 15:01:14 policy-pap | metrics.num.samples = 2 15:01:14 policy-pap | metrics.recording.level = INFO 15:01:14 policy-pap | metrics.sample.window.ms = 30000 15:01:14 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 15:01:14 policy-pap | receive.buffer.bytes = 65536 15:01:14 policy-pap | reconnect.backoff.max.ms = 1000 15:01:14 policy-pap | reconnect.backoff.ms = 50 15:01:14 policy-pap | request.timeout.ms = 30000 15:01:14 policy-pap | retry.backoff.max.ms = 1000 15:01:14 policy-pap | retry.backoff.ms = 100 15:01:14 policy-pap | sasl.client.callback.handler.class = null 15:01:14 policy-pap | sasl.jaas.config = null 15:01:14 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 15:01:14 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 15:01:14 policy-pap | sasl.kerberos.service.name = null 15:01:14 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 15:01:14 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 15:01:14 policy-pap | sasl.login.callback.handler.class = null 15:01:14 policy-pap | sasl.login.class = null 15:01:14 policy-pap | sasl.login.connect.timeout.ms = null 15:01:14 policy-pap | sasl.login.read.timeout.ms = null 15:01:14 policy-pap | sasl.login.refresh.buffer.seconds = 300 15:01:14 policy-pap | sasl.login.refresh.min.period.seconds = 60 15:01:14 policy-pap | sasl.login.refresh.window.factor = 0.8 15:01:14 policy-pap | sasl.login.refresh.window.jitter = 0.05 15:01:14 policy-pap | sasl.login.retry.backoff.max.ms = 10000 15:01:14 policy-pap | 
sasl.login.retry.backoff.ms = 100 15:01:14 policy-pap | sasl.mechanism = GSSAPI 15:01:14 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 15:01:14 policy-pap | sasl.oauthbearer.expected.audience = null 15:01:14 policy-pap | sasl.oauthbearer.expected.issuer = null 15:01:14 policy-pap | sasl.oauthbearer.header.urlencode = false 15:01:14 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 15:01:14 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 15:01:14 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 15:01:14 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 15:01:14 policy-pap | sasl.oauthbearer.scope.claim.name = scope 15:01:14 policy-pap | sasl.oauthbearer.sub.claim.name = sub 15:01:14 policy-pap | sasl.oauthbearer.token.endpoint.url = null 15:01:14 policy-pap | security.protocol = PLAINTEXT 15:01:14 policy-pap | security.providers = null 15:01:14 policy-pap | send.buffer.bytes = 131072 15:01:14 policy-pap | session.timeout.ms = 45000 15:01:14 policy-pap | socket.connection.setup.timeout.max.ms = 30000 15:01:14 policy-pap | socket.connection.setup.timeout.ms = 10000 15:01:14 policy-pap | ssl.cipher.suites = null 15:01:14 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 15:01:14 policy-pap | ssl.endpoint.identification.algorithm = https 15:01:14 policy-pap | ssl.engine.factory.class = null 15:01:14 policy-pap | ssl.key.password = null 15:01:14 policy-pap | ssl.keymanager.algorithm = SunX509 15:01:14 policy-pap | ssl.keystore.certificate.chain = null 15:01:14 policy-pap | ssl.keystore.key = null 15:01:14 policy-pap | ssl.keystore.location = null 15:01:14 policy-pap | ssl.keystore.password = null 15:01:14 policy-pap | ssl.keystore.type = JKS 15:01:14 policy-pap | ssl.protocol = TLSv1.3 15:01:14 policy-pap | ssl.provider = null 15:01:14 policy-pap | ssl.secure.random.implementation = null 15:01:14 policy-pap | ssl.trustmanager.algorithm = PKIX 15:01:14 policy-pap | ssl.truststore.certificates = null 15:01:14 policy-pap | ssl.truststore.location = null 15:01:14 policy-pap | ssl.truststore.password = null 15:01:14 policy-pap | ssl.truststore.type = JKS 15:01:14 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 15:01:14 policy-pap | 15:01:14 policy-pap | [2025-06-16T14:55:56.485+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector 15:01:14 policy-pap | [2025-06-16T14:55:56.491+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 15:01:14 policy-pap | [2025-06-16T14:55:56.491+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 15:01:14 policy-pap | [2025-06-16T14:55:56.491+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750085756491 15:01:14 policy-pap | [2025-06-16T14:55:56.491+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap 15:01:14 policy-pap | [2025-06-16T14:55:56.492+00:00|INFO|ServiceManager|main] Policy PAP starting topics 15:01:14 policy-pap | [2025-06-16T14:55:56.492+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=173e64e7-0b21-4e96-9e8b-9121badea817, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-heartbeat,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, 
useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 15:01:14 policy-pap | [2025-06-16T14:55:56.492+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=477ccfe3-c295-43ff-8034-7aaaa0b17546, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 15:01:14 policy-pap | [2025-06-16T14:55:56.492+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=5f242aa8-7098-4544-a09d-2221bd7029da, alive=false, publisher=null]]: starting 15:01:14 policy-pap | [2025-06-16T14:55:56.507+00:00|INFO|ProducerConfig|main] ProducerConfig values: 15:01:14 policy-pap | acks = -1 15:01:14 policy-pap | auto.include.jmx.reporter = true 15:01:14 policy-pap | batch.size = 16384 15:01:14 policy-pap | bootstrap.servers = [kafka:9092] 15:01:14 policy-pap | buffer.memory = 33554432 15:01:14 policy-pap | client.dns.lookup = use_all_dns_ips 15:01:14 policy-pap | client.id = producer-1 15:01:14 policy-pap | compression.gzip.level = -1 15:01:14 policy-pap | compression.lz4.level = 9 15:01:14 policy-pap | compression.type = none 15:01:14 policy-pap | compression.zstd.level = 3 15:01:14 policy-pap | connections.max.idle.ms = 540000 15:01:14 policy-pap | delivery.timeout.ms = 120000 15:01:14 policy-pap | enable.idempotence = true 15:01:14 policy-pap | enable.metrics.push = true 15:01:14 policy-pap | interceptor.classes = [] 15:01:14 policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 15:01:14 policy-pap | linger.ms = 0 15:01:14 policy-pap | max.block.ms = 60000 15:01:14 policy-pap | max.in.flight.requests.per.connection = 5 15:01:14 policy-pap | max.request.size = 1048576 15:01:14 policy-pap | metadata.max.age.ms = 300000 15:01:14 policy-pap | metadata.max.idle.ms = 300000 15:01:14 policy-pap | metadata.recovery.strategy = none 15:01:14 policy-pap | metric.reporters = [] 15:01:14 policy-pap | metrics.num.samples = 2 15:01:14 policy-pap | metrics.recording.level = INFO 15:01:14 policy-pap | metrics.sample.window.ms = 30000 15:01:14 policy-pap | partitioner.adaptive.partitioning.enable = true 15:01:14 policy-pap | partitioner.availability.timeout.ms = 0 15:01:14 policy-pap | partitioner.class = null 15:01:14 policy-pap | partitioner.ignore.keys = false 15:01:14 policy-pap | receive.buffer.bytes = 32768 15:01:14 policy-pap | reconnect.backoff.max.ms = 1000 15:01:14 policy-pap | reconnect.backoff.ms = 50 15:01:14 policy-pap | request.timeout.ms = 30000 15:01:14 policy-pap | retries = 2147483647 15:01:14 policy-pap | retry.backoff.max.ms = 1000 15:01:14 policy-pap | retry.backoff.ms = 100 15:01:14 policy-pap | sasl.client.callback.handler.class = null 15:01:14 policy-pap | sasl.jaas.config = null 15:01:14 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 15:01:14 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 15:01:14 policy-pap 
| sasl.kerberos.service.name = null 15:01:14 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 15:01:14 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 15:01:14 policy-pap | sasl.login.callback.handler.class = null 15:01:14 policy-pap | sasl.login.class = null 15:01:14 policy-pap | sasl.login.connect.timeout.ms = null 15:01:14 policy-pap | sasl.login.read.timeout.ms = null 15:01:14 policy-pap | sasl.login.refresh.buffer.seconds = 300 15:01:14 policy-pap | sasl.login.refresh.min.period.seconds = 60 15:01:14 policy-pap | sasl.login.refresh.window.factor = 0.8 15:01:14 policy-pap | sasl.login.refresh.window.jitter = 0.05 15:01:14 policy-pap | sasl.login.retry.backoff.max.ms = 10000 15:01:14 policy-pap | sasl.login.retry.backoff.ms = 100 15:01:14 policy-pap | sasl.mechanism = GSSAPI 15:01:14 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 15:01:14 policy-pap | sasl.oauthbearer.expected.audience = null 15:01:14 policy-pap | sasl.oauthbearer.expected.issuer = null 15:01:14 policy-pap | sasl.oauthbearer.header.urlencode = false 15:01:14 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 15:01:14 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 15:01:14 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 15:01:14 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 15:01:14 policy-pap | sasl.oauthbearer.scope.claim.name = scope 15:01:14 policy-pap | sasl.oauthbearer.sub.claim.name = sub 15:01:14 policy-pap | sasl.oauthbearer.token.endpoint.url = null 15:01:14 policy-pap | security.protocol = PLAINTEXT 15:01:14 policy-pap | security.providers = null 15:01:14 policy-pap | send.buffer.bytes = 131072 15:01:14 policy-pap | socket.connection.setup.timeout.max.ms = 30000 15:01:14 policy-pap | socket.connection.setup.timeout.ms = 10000 15:01:14 policy-pap | ssl.cipher.suites = null 15:01:14 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 15:01:14 policy-pap | ssl.endpoint.identification.algorithm = https 15:01:14 policy-pap | ssl.engine.factory.class = null 15:01:14 policy-pap | ssl.key.password = null 15:01:14 policy-pap | ssl.keymanager.algorithm = SunX509 15:01:14 policy-pap | ssl.keystore.certificate.chain = null 15:01:14 policy-pap | ssl.keystore.key = null 15:01:14 policy-pap | ssl.keystore.location = null 15:01:14 policy-pap | ssl.keystore.password = null 15:01:14 policy-pap | ssl.keystore.type = JKS 15:01:14 policy-pap | ssl.protocol = TLSv1.3 15:01:14 policy-pap | ssl.provider = null 15:01:14 policy-pap | ssl.secure.random.implementation = null 15:01:14 policy-pap | ssl.trustmanager.algorithm = PKIX 15:01:14 policy-pap | ssl.truststore.certificates = null 15:01:14 policy-pap | ssl.truststore.location = null 15:01:14 policy-pap | ssl.truststore.password = null 15:01:14 policy-pap | ssl.truststore.type = JKS 15:01:14 policy-pap | transaction.timeout.ms = 60000 15:01:14 policy-pap | transactional.id = null 15:01:14 policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 15:01:14 policy-pap | 15:01:14 policy-pap | [2025-06-16T14:55:56.508+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector 15:01:14 policy-pap | [2025-06-16T14:55:56.522+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. 
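[editor's note] The producer-1 dump above shows PAP's Kafka sinks relying on the idempotence defaults (acks = -1, enable.idempotence = true, retries = 2147483647), which is why the "Instantiated an idempotent producer" line follows and the broker later assigns it ProducerId 0. A minimal sketch of an equivalent idempotent publisher against the same broker — the choice of the confluent-kafka Python client and the message body are assumptions; only the broker address, topic, and config values are taken from the log:

from confluent_kafka import Producer

# Mirror the logged ProducerConfig values that matter for idempotence.
producer = Producer({
    "bootstrap.servers": "kafka:9092",   # from the log's bootstrap.servers
    "acks": "all",                       # logged as acks = -1
    "enable.idempotence": True,          # logged as enable.idempotence = true
    "compression.type": "none",          # logged default
})

def on_delivery(err, msg):
    # Report per-message delivery status, roughly as the PAP sink logs it.
    print("delivery failed:", err) if err else print("delivered to", msg.topic())

producer.produce("policy-pdp-pap",
                 value=b'{"messageName":"PDP_UPDATE"}',  # placeholder body
                 on_delivery=on_delivery)
producer.flush()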
15:01:14 policy-pap | [2025-06-16T14:55:56.540+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 15:01:14 policy-pap | [2025-06-16T14:55:56.541+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 15:01:14 policy-pap | [2025-06-16T14:55:56.541+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750085756540 15:01:14 policy-pap | [2025-06-16T14:55:56.541+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=5f242aa8-7098-4544-a09d-2221bd7029da, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 15:01:14 policy-pap | [2025-06-16T14:55:56.541+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=49f1a0dd-8029-4fd1-b5cf-93d1b40288e2, alive=false, publisher=null]]: starting 15:01:14 policy-pap | [2025-06-16T14:55:56.542+00:00|INFO|ProducerConfig|main] ProducerConfig values: 15:01:14 policy-pap | acks = -1 15:01:14 policy-pap | auto.include.jmx.reporter = true 15:01:14 policy-pap | batch.size = 16384 15:01:14 policy-pap | bootstrap.servers = [kafka:9092] 15:01:14 policy-pap | buffer.memory = 33554432 15:01:14 policy-pap | client.dns.lookup = use_all_dns_ips 15:01:14 policy-pap | client.id = producer-2 15:01:14 policy-pap | compression.gzip.level = -1 15:01:14 policy-pap | compression.lz4.level = 9 15:01:14 policy-pap | compression.type = none 15:01:14 policy-pap | compression.zstd.level = 3 15:01:14 policy-pap | connections.max.idle.ms = 540000 15:01:14 policy-pap | delivery.timeout.ms = 120000 15:01:14 policy-pap | enable.idempotence = true 15:01:14 policy-pap | enable.metrics.push = true 15:01:14 policy-pap | interceptor.classes = [] 15:01:14 policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 15:01:14 policy-pap | linger.ms = 0 15:01:14 policy-pap | max.block.ms = 60000 15:01:14 policy-pap | max.in.flight.requests.per.connection = 5 15:01:14 policy-pap | max.request.size = 1048576 15:01:14 policy-pap | metadata.max.age.ms = 300000 15:01:14 policy-pap | metadata.max.idle.ms = 300000 15:01:14 policy-pap | metadata.recovery.strategy = none 15:01:14 policy-pap | metric.reporters = [] 15:01:14 policy-pap | metrics.num.samples = 2 15:01:14 policy-pap | metrics.recording.level = INFO 15:01:14 policy-pap | metrics.sample.window.ms = 30000 15:01:14 policy-pap | partitioner.adaptive.partitioning.enable = true 15:01:14 policy-pap | partitioner.availability.timeout.ms = 0 15:01:14 policy-pap | partitioner.class = null 15:01:14 policy-pap | partitioner.ignore.keys = false 15:01:14 policy-pap | receive.buffer.bytes = 32768 15:01:14 policy-pap | reconnect.backoff.max.ms = 1000 15:01:14 policy-pap | reconnect.backoff.ms = 50 15:01:14 policy-pap | request.timeout.ms = 30000 15:01:14 policy-pap | retries = 2147483647 15:01:14 policy-pap | retry.backoff.max.ms = 1000 15:01:14 policy-pap | retry.backoff.ms = 100 15:01:14 policy-pap | sasl.client.callback.handler.class = null 15:01:14 policy-pap | sasl.jaas.config = null 15:01:14 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 15:01:14 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 15:01:14 policy-pap | sasl.kerberos.service.name = null 15:01:14 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 15:01:14 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 15:01:14 policy-pap | sasl.login.callback.handler.class = null 15:01:14 policy-pap | sasl.login.class = null 15:01:14 policy-pap | 
sasl.login.connect.timeout.ms = null 15:01:14 policy-pap | sasl.login.read.timeout.ms = null 15:01:14 policy-pap | sasl.login.refresh.buffer.seconds = 300 15:01:14 policy-pap | sasl.login.refresh.min.period.seconds = 60 15:01:14 policy-pap | sasl.login.refresh.window.factor = 0.8 15:01:14 policy-pap | sasl.login.refresh.window.jitter = 0.05 15:01:14 policy-pap | sasl.login.retry.backoff.max.ms = 10000 15:01:14 policy-pap | sasl.login.retry.backoff.ms = 100 15:01:14 policy-pap | sasl.mechanism = GSSAPI 15:01:14 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 15:01:14 policy-pap | sasl.oauthbearer.expected.audience = null 15:01:14 policy-pap | sasl.oauthbearer.expected.issuer = null 15:01:14 policy-pap | sasl.oauthbearer.header.urlencode = false 15:01:14 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 15:01:14 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 15:01:14 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 15:01:14 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 15:01:14 policy-pap | sasl.oauthbearer.scope.claim.name = scope 15:01:14 policy-pap | sasl.oauthbearer.sub.claim.name = sub 15:01:14 policy-pap | sasl.oauthbearer.token.endpoint.url = null 15:01:14 policy-pap | security.protocol = PLAINTEXT 15:01:14 policy-pap | security.providers = null 15:01:14 policy-pap | send.buffer.bytes = 131072 15:01:14 policy-pap | socket.connection.setup.timeout.max.ms = 30000 15:01:14 policy-pap | socket.connection.setup.timeout.ms = 10000 15:01:14 policy-pap | ssl.cipher.suites = null 15:01:14 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 15:01:14 policy-pap | ssl.endpoint.identification.algorithm = https 15:01:14 policy-pap | ssl.engine.factory.class = null 15:01:14 policy-pap | ssl.key.password = null 15:01:14 policy-pap | ssl.keymanager.algorithm = SunX509 15:01:14 policy-pap | ssl.keystore.certificate.chain = null 15:01:14 policy-pap | ssl.keystore.key = null 15:01:14 policy-pap | ssl.keystore.location = null 15:01:14 policy-pap | ssl.keystore.password = null 15:01:14 policy-pap | ssl.keystore.type = JKS 15:01:14 policy-pap | ssl.protocol = TLSv1.3 15:01:14 policy-pap | ssl.provider = null 15:01:14 policy-pap | ssl.secure.random.implementation = null 15:01:14 policy-pap | ssl.trustmanager.algorithm = PKIX 15:01:14 policy-pap | ssl.truststore.certificates = null 15:01:14 policy-pap | ssl.truststore.location = null 15:01:14 policy-pap | ssl.truststore.password = null 15:01:14 policy-pap | ssl.truststore.type = JKS 15:01:14 policy-pap | transaction.timeout.ms = 60000 15:01:14 policy-pap | transactional.id = null 15:01:14 policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 15:01:14 policy-pap | 15:01:14 policy-pap | [2025-06-16T14:55:56.542+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector 15:01:14 policy-pap | [2025-06-16T14:55:56.543+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-2] Instantiated an idempotent producer. 
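[editor's note] On the consumer side, the three ConsumerConfig dumps above all follow the same pattern: classic group protocol, auto.offset.reset = latest, StringDeserializer for key and value, and a subscription to policy-pdp-pap. A sketch of an equivalent observer consumer — the client library and poll loop are assumptions; the group id, topic, and config values are taken from the log:

from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "kafka:9092",
    "group.id": "policy-pap",        # logged group.id of consumer-policy-pap-4;
                                     # use a distinct id in practice to avoid
                                     # rebalancing the real PAP consumer group
    "auto.offset.reset": "latest",   # matches the logged default
    "enable.auto.commit": True,      # auto.commit.interval.ms = 5000 in the log
})
consumer.subscribe(["policy-pdp-pap"])

try:
    while True:
        msg = consumer.poll(timeout=15.0)  # fetchTimeout=15000 in the source toString
        if msg is None or msg.error():
            continue
        print(msg.topic(), msg.value().decode())
finally:
    consumer.close()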
15:01:14 policy-pap | [2025-06-16T14:55:56.548+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 15:01:14 policy-pap | [2025-06-16T14:55:56.548+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 15:01:14 policy-pap | [2025-06-16T14:55:56.548+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750085756548 15:01:14 policy-pap | [2025-06-16T14:55:56.548+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=49f1a0dd-8029-4fd1-b5cf-93d1b40288e2, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 15:01:14 policy-pap | [2025-06-16T14:55:56.548+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator 15:01:14 policy-pap | [2025-06-16T14:55:56.549+00:00|INFO|ServiceManager|main] Policy PAP starting PDP publisher 15:01:14 policy-pap | [2025-06-16T14:55:56.550+00:00|INFO|ServiceManager|main] Policy PAP starting Policy Notification publisher 15:01:14 policy-pap | [2025-06-16T14:55:56.551+00:00|INFO|ServiceManager|main] Policy PAP starting PDP update timers 15:01:14 policy-pap | [2025-06-16T14:55:56.557+00:00|INFO|ServiceManager|main] Policy PAP starting PDP state-change timers 15:01:14 policy-pap | [2025-06-16T14:55:56.558+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification lock 15:01:14 policy-pap | [2025-06-16T14:55:56.558+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification requests 15:01:14 policy-pap | [2025-06-16T14:55:56.559+00:00|INFO|ServiceManager|main] Policy PAP starting PDP expiration timer 15:01:14 policy-pap | [2025-06-16T14:55:56.560+00:00|INFO|TimerManager|Thread-9] timer manager update started 15:01:14 policy-pap | [2025-06-16T14:55:56.562+00:00|INFO|ServiceManager|main] Policy PAP started 15:01:14 policy-pap | [2025-06-16T14:55:56.563+00:00|INFO|PolicyPapApplication|main] Started PolicyPapApplication in 11.046 seconds (process running for 11.676) 15:01:14 policy-pap | [2025-06-16T14:55:56.562+00:00|INFO|TimerManager|Thread-10] timer manager state-change started 15:01:14 policy-pap | [2025-06-16T14:55:57.028+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-477ccfe3-c295-43ff-8034-7aaaa0b17546-3, groupId=477ccfe3-c295-43ff-8034-7aaaa0b17546] The metadata response from the cluster reported a recoverable issue with correlation id 3 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 15:01:14 policy-pap | [2025-06-16T14:55:57.029+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-477ccfe3-c295-43ff-8034-7aaaa0b17546-3, groupId=477ccfe3-c295-43ff-8034-7aaaa0b17546] Cluster ID: Wfl8AkZLQj6X2gSeQGSSIQ 15:01:14 policy-pap | [2025-06-16T14:55:57.029+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: Wfl8AkZLQj6X2gSeQGSSIQ 15:01:14 policy-pap | [2025-06-16T14:55:57.030+00:00|INFO|Metadata|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Cluster ID: Wfl8AkZLQj6X2gSeQGSSIQ 15:01:14 policy-pap | [2025-06-16T14:55:57.072+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 0 with epoch 0 15:01:14 policy-pap | [2025-06-16T14:55:57.072+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] ProducerId set to 1 with epoch 0 15:01:14 policy-pap | [2025-06-16T14:55:57.091+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The 
metadata response from the cluster reported a recoverable issue with correlation id 3 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 15:01:14 policy-pap | [2025-06-16T14:55:57.092+00:00|INFO|Metadata|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Cluster ID: Wfl8AkZLQj6X2gSeQGSSIQ 15:01:14 policy-pap | [2025-06-16T14:55:57.234+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 7 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 15:01:14 policy-pap | [2025-06-16T14:55:57.247+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-477ccfe3-c295-43ff-8034-7aaaa0b17546-3, groupId=477ccfe3-c295-43ff-8034-7aaaa0b17546] The metadata response from the cluster reported a recoverable issue with correlation id 7 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 15:01:14 policy-pap | [2025-06-16T14:55:57.439+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 9 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 15:01:14 policy-pap | [2025-06-16T14:55:57.470+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-477ccfe3-c295-43ff-8034-7aaaa0b17546-3, groupId=477ccfe3-c295-43ff-8034-7aaaa0b17546] The metadata response from the cluster reported a recoverable issue with correlation id 9 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 15:01:14 policy-pap | [2025-06-16T14:55:57.884+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 15:01:14 policy-pap | [2025-06-16T14:55:57.891+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group 15:01:14 policy-pap | [2025-06-16T14:55:57.920+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: need to re-join with the given member-id: consumer-policy-pap-4-b7433ea9-2bd8-40f9-a950-a1ec305b1c31 15:01:14 policy-pap | [2025-06-16T14:55:57.921+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group 15:01:14 policy-pap | [2025-06-16T14:55:57.933+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-477ccfe3-c295-43ff-8034-7aaaa0b17546-3, groupId=477ccfe3-c295-43ff-8034-7aaaa0b17546] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 15:01:14 policy-pap | [2025-06-16T14:55:57.936+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-477ccfe3-c295-43ff-8034-7aaaa0b17546-3, groupId=477ccfe3-c295-43ff-8034-7aaaa0b17546] (Re-)joining group 15:01:14 policy-pap | [2025-06-16T14:55:57.943+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-477ccfe3-c295-43ff-8034-7aaaa0b17546-3, groupId=477ccfe3-c295-43ff-8034-7aaaa0b17546] Request joining group due to: need to re-join with the given member-id: consumer-477ccfe3-c295-43ff-8034-7aaaa0b17546-3-6d299dab-b299-4571-9596-78f7691c5dee 15:01:14 policy-pap | [2025-06-16T14:55:57.944+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer 
clientId=consumer-477ccfe3-c295-43ff-8034-7aaaa0b17546-3, groupId=477ccfe3-c295-43ff-8034-7aaaa0b17546] (Re-)joining group 15:01:14 policy-pap | [2025-06-16T14:56:00.953+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully joined group with generation Generation{generationId=1, memberId='consumer-policy-pap-4-b7433ea9-2bd8-40f9-a950-a1ec305b1c31', protocol='range'} 15:01:14 policy-pap | [2025-06-16T14:56:00.958+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-477ccfe3-c295-43ff-8034-7aaaa0b17546-3, groupId=477ccfe3-c295-43ff-8034-7aaaa0b17546] Successfully joined group with generation Generation{generationId=1, memberId='consumer-477ccfe3-c295-43ff-8034-7aaaa0b17546-3-6d299dab-b299-4571-9596-78f7691c5dee', protocol='range'} 15:01:14 policy-pap | [2025-06-16T14:56:00.961+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Finished assignment for group at generation 1: {consumer-policy-pap-4-b7433ea9-2bd8-40f9-a950-a1ec305b1c31=Assignment(partitions=[policy-pdp-pap-0])} 15:01:14 policy-pap | [2025-06-16T14:56:00.961+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-477ccfe3-c295-43ff-8034-7aaaa0b17546-3, groupId=477ccfe3-c295-43ff-8034-7aaaa0b17546] Finished assignment for group at generation 1: {consumer-477ccfe3-c295-43ff-8034-7aaaa0b17546-3-6d299dab-b299-4571-9596-78f7691c5dee=Assignment(partitions=[policy-pdp-pap-0])} 15:01:14 policy-pap | [2025-06-16T14:56:01.024+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully synced group in generation Generation{generationId=1, memberId='consumer-policy-pap-4-b7433ea9-2bd8-40f9-a950-a1ec305b1c31', protocol='range'} 15:01:14 policy-pap | [2025-06-16T14:56:01.025+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 15:01:14 policy-pap | [2025-06-16T14:56:01.026+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-477ccfe3-c295-43ff-8034-7aaaa0b17546-3, groupId=477ccfe3-c295-43ff-8034-7aaaa0b17546] Successfully synced group in generation Generation{generationId=1, memberId='consumer-477ccfe3-c295-43ff-8034-7aaaa0b17546-3-6d299dab-b299-4571-9596-78f7691c5dee', protocol='range'} 15:01:14 policy-pap | [2025-06-16T14:56:01.026+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-477ccfe3-c295-43ff-8034-7aaaa0b17546-3, groupId=477ccfe3-c295-43ff-8034-7aaaa0b17546] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 15:01:14 policy-pap | [2025-06-16T14:56:01.034+00:00|INFO|ConsumerRebalanceListenerInvoker|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-477ccfe3-c295-43ff-8034-7aaaa0b17546-3, groupId=477ccfe3-c295-43ff-8034-7aaaa0b17546] Adding newly assigned partitions: policy-pdp-pap-0 15:01:14 policy-pap | [2025-06-16T14:56:01.034+00:00|INFO|ConsumerRebalanceListenerInvoker|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Adding newly assigned partitions: policy-pdp-pap-0 15:01:14 policy-pap | [2025-06-16T14:56:01.057+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Found no 
committed offset for partition policy-pdp-pap-0 15:01:14 policy-pap | [2025-06-16T14:56:01.057+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-477ccfe3-c295-43ff-8034-7aaaa0b17546-3, groupId=477ccfe3-c295-43ff-8034-7aaaa0b17546] Found no committed offset for partition policy-pdp-pap-0 15:01:14 policy-pap | [2025-06-16T14:56:01.088+00:00|INFO|SubscriptionState|kafka-coordinator-heartbeat-thread | policy-pap] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 15:01:14 policy-pap | [2025-06-16T14:56:01.088+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-477ccfe3-c295-43ff-8034-7aaaa0b17546-3, groupId=477ccfe3-c295-43ff-8034-7aaaa0b17546] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 15:01:14 policy-pap | [2025-06-16T14:56:41.614+00:00|INFO|[/policy/pap/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet' 15:01:14 policy-pap | [2025-06-16T14:56:41.614+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet' 15:01:14 policy-pap | [2025-06-16T14:56:41.617+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 3 ms 15:01:14 policy-pap | [2025-06-16T14:57:51.820+00:00|INFO|OrderedServiceImpl|KAFKA-source-policy-heartbeat] ***** OrderedServiceImpl implementers: 15:01:14 policy-pap | [] 15:01:14 policy-pap | [2025-06-16T14:57:51.821+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 15:01:14 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Status Registration Message","response":null,"policies":[],"name":"opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c","requestId":"6460956e-4a01-4c4a-bb69-b73593813620","pdpGroup":"opaGroup","pdpSubgroup":null,"timestampMs":"1750085871774","deploymentInstanceInfo":""} 15:01:14 policy-pap | [2025-06-16T14:57:51.822+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 15:01:14 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Status Registration Message","response":null,"policies":[],"name":"opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c","requestId":"6460956e-4a01-4c4a-bb69-b73593813620","pdpGroup":"opaGroup","pdpSubgroup":null,"timestampMs":"1750085871774","deploymentInstanceInfo":""} 15:01:14 policy-pap | [2025-06-16T14:57:51.828+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus 15:01:14 policy-pap | [2025-06-16T14:57:52.384+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpUpdate starting 15:01:14 policy-pap | [2025-06-16T14:57:52.384+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpUpdate starting listener 15:01:14 policy-pap | [2025-06-16T14:57:52.384+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpUpdate starting timer 15:01:14 policy-pap | [2025-06-16T14:57:52.385+00:00|INFO|TimerManager|KAFKA-source-policy-heartbeat] update timer registered Timer 
[name=28474638-9707-4bbd-8aaa-789bc1608bfe, expireMs=1750085902385] 15:01:14 policy-pap | [2025-06-16T14:57:52.387+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpUpdate starting enqueue 15:01:14 policy-pap | [2025-06-16T14:57:52.387+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpUpdate started 15:01:14 policy-pap | [2025-06-16T14:57:52.387+00:00|INFO|TimerManager|Thread-9] update timer waiting 29998ms Timer [name=28474638-9707-4bbd-8aaa-789bc1608bfe, expireMs=1750085902385] 15:01:14 policy-pap | [2025-06-16T14:57:52.393+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 15:01:14 policy-pap | {"source":"pap-ef1c2203-595c-4ac7-b11f-f04e7efc9986","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.slice.capacity.check":"ewogICAgInRocmVzaG9sZCI6IDcwCn0="},"policy":{"slice.capacity.check":"cGFja2FnZSBzbGljZS5jYXBhY2l0eS5jaGVjawoKIyBEZWZhdWx0IHJ1bGUgdG8gZGVueSBpZiBubyBwb2xpY3kgbWF0Y2hlcwpkZWZhdWx0IGRlY2lzaW9uIDo9IHsKCSJyZXN1bHQiOiAiUGVybWl0IiwKCSJyZWFzb24iOiAiTm8gbWF0Y2hpbmcgcnVsZXMgYXBwbGllZCIsCn0KCiMgRGVueSBydWxlIGZvciBgc3N0ID0gMWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAxCglpbnB1dC50b3RhbF9yZXNvdXJjZSA+IGRhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGQKfQoKIyBEZW55IHJ1bGUgZm9yIGBzc3QgPSAyOWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAyOQoJaW5wdXQudG90YWxfcmVzb3VyY2UgPiBkYXRhLm5vZGUuc2xpY2UuY2FwYWNpdHkuY2hlY2sudGhyZXNob2xkCn0K"}},"name":"slice.capacity.check","version":"1.0.0","metadata":{"policy-id":"slice.capacity.check","policy-version":"1.0.0"}}],"messageName":"PDP_UPDATE","requestId":"28474638-9707-4bbd-8aaa-789bc1608bfe","timestampMs":1750085872350,"name":"opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 15:01:14 policy-pap | [2025-06-16T14:57:52.455+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 15:01:14 policy-pap | 
{"source":"pap-ef1c2203-595c-4ac7-b11f-f04e7efc9986","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.slice.capacity.check":"ewogICAgInRocmVzaG9sZCI6IDcwCn0="},"policy":{"slice.capacity.check":"cGFja2FnZSBzbGljZS5jYXBhY2l0eS5jaGVjawoKIyBEZWZhdWx0IHJ1bGUgdG8gZGVueSBpZiBubyBwb2xpY3kgbWF0Y2hlcwpkZWZhdWx0IGRlY2lzaW9uIDo9IHsKCSJyZXN1bHQiOiAiUGVybWl0IiwKCSJyZWFzb24iOiAiTm8gbWF0Y2hpbmcgcnVsZXMgYXBwbGllZCIsCn0KCiMgRGVueSBydWxlIGZvciBgc3N0ID0gMWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAxCglpbnB1dC50b3RhbF9yZXNvdXJjZSA+IGRhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGQKfQoKIyBEZW55IHJ1bGUgZm9yIGBzc3QgPSAyOWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAyOQoJaW5wdXQudG90YWxfcmVzb3VyY2UgPiBkYXRhLm5vZGUuc2xpY2UuY2FwYWNpdHkuY2hlY2sudGhyZXNob2xkCn0K"}},"name":"slice.capacity.check","version":"1.0.0","metadata":{"policy-id":"slice.capacity.check","policy-version":"1.0.0"}}],"messageName":"PDP_UPDATE","requestId":"28474638-9707-4bbd-8aaa-789bc1608bfe","timestampMs":1750085872350,"name":"opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 15:01:14 policy-pap | [2025-06-16T14:57:52.456+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 15:01:14 policy-pap | [2025-06-16T14:57:52.458+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 15:01:14 policy-pap | 
{"source":"pap-ef1c2203-595c-4ac7-b11f-f04e7efc9986","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.slice.capacity.check":"ewogICAgInRocmVzaG9sZCI6IDcwCn0="},"policy":{"slice.capacity.check":"cGFja2FnZSBzbGljZS5jYXBhY2l0eS5jaGVjawoKIyBEZWZhdWx0IHJ1bGUgdG8gZGVueSBpZiBubyBwb2xpY3kgbWF0Y2hlcwpkZWZhdWx0IGRlY2lzaW9uIDo9IHsKCSJyZXN1bHQiOiAiUGVybWl0IiwKCSJyZWFzb24iOiAiTm8gbWF0Y2hpbmcgcnVsZXMgYXBwbGllZCIsCn0KCiMgRGVueSBydWxlIGZvciBgc3N0ID0gMWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAxCglpbnB1dC50b3RhbF9yZXNvdXJjZSA+IGRhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGQKfQoKIyBEZW55IHJ1bGUgZm9yIGBzc3QgPSAyOWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAyOQoJaW5wdXQudG90YWxfcmVzb3VyY2UgPiBkYXRhLm5vZGUuc2xpY2UuY2FwYWNpdHkuY2hlY2sudGhyZXNob2xkCn0K"}},"name":"slice.capacity.check","version":"1.0.0","metadata":{"policy-id":"slice.capacity.check","policy-version":"1.0.0"}}],"messageName":"PDP_UPDATE","requestId":"28474638-9707-4bbd-8aaa-789bc1608bfe","timestampMs":1750085872350,"name":"opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 15:01:14 policy-pap | [2025-06-16T14:57:52.459+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 15:01:14 policy-pap | [2025-06-16T14:57:52.495+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 15:01:14 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"28474638-9707-4bbd-8aaa-789bc1608bfe","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"slice.capacity.check\": \"1.0.0\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c","requestId":"0b2d8de5-4382-46f5-be2d-0fc1a236eff5","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750085872476","deploymentInstanceInfo":""} 15:01:14 policy-pap | [2025-06-16T14:57:52.495+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 15:01:14 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"28474638-9707-4bbd-8aaa-789bc1608bfe","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"slice.capacity.check\": \"1.0.0\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c","requestId":"0b2d8de5-4382-46f5-be2d-0fc1a236eff5","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750085872476","deploymentInstanceInfo":""} 15:01:14 policy-pap | [2025-06-16T14:57:52.496+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 28474638-9707-4bbd-8aaa-789bc1608bfe 15:01:14 policy-pap | 
[2025-06-16T14:57:52.496+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpUpdate stopping 15:01:14 policy-pap | [2025-06-16T14:57:52.497+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpUpdate stopping enqueue 15:01:14 policy-pap | [2025-06-16T14:57:52.497+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpUpdate stopping timer 15:01:14 policy-pap | [2025-06-16T14:57:52.497+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=28474638-9707-4bbd-8aaa-789bc1608bfe, expireMs=1750085902385] 15:01:14 policy-pap | [2025-06-16T14:57:52.497+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpUpdate stopping listener 15:01:14 policy-pap | [2025-06-16T14:57:52.497+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpUpdate stopped 15:01:14 policy-pap | [2025-06-16T14:57:52.513+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpUpdate successful 15:01:14 policy-pap | [2025-06-16T14:57:52.513+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c start publishing next request 15:01:14 policy-pap | [2025-06-16T14:57:52.513+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpStateChange starting 15:01:14 policy-pap | [2025-06-16T14:57:52.513+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpStateChange starting listener 15:01:14 policy-pap | [2025-06-16T14:57:52.513+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpStateChange starting timer 15:01:14 policy-pap | [2025-06-16T14:57:52.513+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer registered Timer [name=a72fc544-387e-487c-a1ea-700fb123d178, expireMs=1750085902513] 15:01:14 policy-pap | [2025-06-16T14:57:52.513+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpStateChange starting enqueue 15:01:14 policy-pap | [2025-06-16T14:57:52.513+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpStateChange started 15:01:14 policy-pap | [2025-06-16T14:57:52.513+00:00|INFO|TimerManager|Thread-10] state-change timer waiting 30000ms Timer [name=a72fc544-387e-487c-a1ea-700fb123d178, expireMs=1750085902513] 15:01:14 policy-pap | [2025-06-16T14:57:52.514+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification] 15:01:14 policy-pap | {"deployed-policies":[{"policy-type":"onap.policies.native.opa","policy-type-version":"1.0.0","policy-id":"slice.capacity.check","policy-version":"1.0.0","success-count":1,"failure-count":0,"incomplete-count":0}],"undeployed-policies":[]} 15:01:14 policy-pap | [2025-06-16T14:57:52.514+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 15:01:14 policy-pap | {"source":"pap-ef1c2203-595c-4ac7-b11f-f04e7efc9986","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"a72fc544-387e-487c-a1ea-700fb123d178","timestampMs":1750085872350,"name":"opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 15:01:14 policy-pap | [2025-06-16T14:57:52.528+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 15:01:14 policy-pap | 
{"source":"pap-ef1c2203-595c-4ac7-b11f-f04e7efc9986","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"a72fc544-387e-487c-a1ea-700fb123d178","timestampMs":1750085872350,"name":"opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 15:01:14 policy-pap | [2025-06-16T14:57:52.528+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_STATE_CHANGE 15:01:14 policy-pap | [2025-06-16T14:57:52.536+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 15:01:14 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message to Pdp State Change","response":{"responseTo":"a72fc544-387e-487c-a1ea-700fb123d178","responseStatus":"SUCCESS","responseMessage":"PDP State Changed From PASSIVE TO Active"},"policies":[],"name":"opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c","requestId":"92b067b6-f5ea-4d41-a00e-ac6e59e5195a","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750085872523","deploymentInstanceInfo":""} 15:01:14 policy-pap | [2025-06-16T14:57:52.539+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id a72fc544-387e-487c-a1ea-700fb123d178 15:01:14 policy-pap | [2025-06-16T14:57:52.540+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] The metadata response from the cluster reported a recoverable issue with correlation id 7 : {policy-notification=LEADER_NOT_AVAILABLE} 15:01:14 policy-pap | [2025-06-16T14:57:52.902+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 15:01:14 policy-pap | {"source":"pap-ef1c2203-595c-4ac7-b11f-f04e7efc9986","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"a72fc544-387e-487c-a1ea-700fb123d178","timestampMs":1750085872350,"name":"opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 15:01:14 policy-pap | [2025-06-16T14:57:52.903+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATE_CHANGE 15:01:14 policy-pap | [2025-06-16T14:57:52.906+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 15:01:14 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message to Pdp State Change","response":{"responseTo":"a72fc544-387e-487c-a1ea-700fb123d178","responseStatus":"SUCCESS","responseMessage":"PDP State Changed From PASSIVE TO Active"},"policies":[],"name":"opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c","requestId":"92b067b6-f5ea-4d41-a00e-ac6e59e5195a","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750085872523","deploymentInstanceInfo":""} 15:01:14 policy-pap | [2025-06-16T14:57:52.906+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpStateChange stopping 15:01:14 policy-pap | [2025-06-16T14:57:52.906+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpStateChange stopping enqueue 15:01:14 policy-pap | [2025-06-16T14:57:52.907+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpStateChange stopping timer 15:01:14 policy-pap | [2025-06-16T14:57:52.907+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer cancelled Timer [name=a72fc544-387e-487c-a1ea-700fb123d178, expireMs=1750085902513] 15:01:14 policy-pap | 
[2025-06-16T14:57:52.907+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpStateChange stopping listener 15:01:14 policy-pap | [2025-06-16T14:57:52.907+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpStateChange stopped 15:01:14 policy-pap | [2025-06-16T14:57:52.907+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpStateChange successful 15:01:14 policy-pap | [2025-06-16T14:57:52.907+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c start publishing next request 15:01:14 policy-pap | [2025-06-16T14:57:52.907+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpUpdate starting 15:01:14 policy-pap | [2025-06-16T14:57:52.907+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpUpdate starting listener 15:01:14 policy-pap | [2025-06-16T14:57:52.907+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpUpdate starting timer 15:01:14 policy-pap | [2025-06-16T14:57:52.907+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer registered Timer [name=029327b3-0b15-42c8-9721-7c452af01084, expireMs=1750085902907] 15:01:14 policy-pap | [2025-06-16T14:57:52.907+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpUpdate starting enqueue 15:01:14 policy-pap | [2025-06-16T14:57:52.907+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpUpdate started 15:01:14 policy-pap | [2025-06-16T14:57:52.908+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 15:01:14 policy-pap | {"source":"pap-ef1c2203-595c-4ac7-b11f-f04e7efc9986","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"029327b3-0b15-42c8-9721-7c452af01084","timestampMs":1750085872893,"name":"opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 15:01:14 policy-pap | [2025-06-16T14:57:52.915+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 15:01:14 policy-pap | {"source":"pap-ef1c2203-595c-4ac7-b11f-f04e7efc9986","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"029327b3-0b15-42c8-9721-7c452af01084","timestampMs":1750085872893,"name":"opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 15:01:14 policy-pap | [2025-06-16T14:57:52.915+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 15:01:14 policy-pap | [2025-06-16T14:57:52.919+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 15:01:14 policy-pap | {"source":"pap-ef1c2203-595c-4ac7-b11f-f04e7efc9986","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"029327b3-0b15-42c8-9721-7c452af01084","timestampMs":1750085872893,"name":"opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 15:01:14 policy-pap | [2025-06-16T14:57:52.919+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 15:01:14 policy-pap | [2025-06-16T14:57:52.926+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 15:01:14 policy-pap | 
{"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"029327b3-0b15-42c8-9721-7c452af01084","responseStatus":"SUCCESS","responseMessage":"PDP UPDATE is successfull"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c","requestId":"f9020cca-78a1-43b9-9aa6-fc6c4bc9e4fa","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750085872913","deploymentInstanceInfo":""} 15:01:14 policy-pap | [2025-06-16T14:57:52.926+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpUpdate stopping 15:01:14 policy-pap | [2025-06-16T14:57:52.926+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpUpdate stopping enqueue 15:01:14 policy-pap | [2025-06-16T14:57:52.926+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpUpdate stopping timer 15:01:14 policy-pap | [2025-06-16T14:57:52.926+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=029327b3-0b15-42c8-9721-7c452af01084, expireMs=1750085902907] 15:01:14 policy-pap | [2025-06-16T14:57:52.926+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpUpdate stopping listener 15:01:14 policy-pap | [2025-06-16T14:57:52.926+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpUpdate stopped 15:01:14 policy-pap | [2025-06-16T14:57:52.927+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 15:01:14 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"029327b3-0b15-42c8-9721-7c452af01084","responseStatus":"SUCCESS","responseMessage":"PDP UPDATE is successfull"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c","requestId":"f9020cca-78a1-43b9-9aa6-fc6c4bc9e4fa","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750085872913","deploymentInstanceInfo":""} 15:01:14 policy-pap | [2025-06-16T14:57:52.928+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 029327b3-0b15-42c8-9721-7c452af01084 15:01:14 policy-pap | [2025-06-16T14:57:52.934+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpUpdate successful 15:01:14 policy-pap | [2025-06-16T14:57:52.934+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c has no more requests 15:01:14 policy-pap | [2025-06-16T14:57:56.560+00:00|INFO|PdpModifyRequestMap|pool-3-thread-1] check for PDP records older than 360000ms 15:01:14 policy-pap | [2025-06-16T14:58:22.386+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=28474638-9707-4bbd-8aaa-789bc1608bfe, expireMs=1750085902385] 15:01:14 policy-pap | [2025-06-16T14:58:22.514+00:00|INFO|TimerManager|Thread-10] state-change timer discarded (expired) Timer [name=a72fc544-387e-487c-a1ea-700fb123d178, expireMs=1750085902513] 15:01:14 policy-pap | [2025-06-16T14:58:51.792+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 15:01:14 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp 
heartbeat","response":null,"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c","requestId":"c423da31-fbdd-412b-a24a-3e2793263d69","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750085931778","deploymentInstanceInfo":""} 15:01:14 policy-pap | [2025-06-16T14:58:51.792+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 15:01:14 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp heartbeat","response":null,"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c","requestId":"c423da31-fbdd-412b-a24a-3e2793263d69","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750085931778","deploymentInstanceInfo":""} 15:01:14 policy-pap | [2025-06-16T14:58:51.793+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus 15:01:14 policy-pap | [2025-06-16T14:58:54.376+00:00|INFO|SessionData|http-nio-6969-exec-7] cache group opaGroup 15:01:14 policy-pap | [2025-06-16T14:58:54.378+00:00|INFO|PdpGroupDeployProvider|http-nio-6969-exec-7] add policy zoneB 1.0.6 to subgroup opaGroup opa count=2 15:01:14 policy-pap | [2025-06-16T14:58:54.379+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-7] Registering a deploy for policy zoneB 1.0.6 15:01:14 policy-pap | [2025-06-16T14:58:54.380+00:00|INFO|SessionData|http-nio-6969-exec-7] add update opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c opaGroup opa policies=1 15:01:14 policy-pap | [2025-06-16T14:58:54.381+00:00|INFO|SessionData|http-nio-6969-exec-7] update cached group opaGroup 15:01:14 policy-pap | [2025-06-16T14:58:54.381+00:00|INFO|SessionData|http-nio-6969-exec-7] updating DB group opaGroup 15:01:14 policy-pap | [2025-06-16T14:58:54.407+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-7] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=opaGroup, pdpType=opa, policy=zoneB 1.0.6, action=DEPLOYMENT, timestamp=2025-06-16T14:58:54Z, user=policyadmin)] 15:01:14 policy-pap | [2025-06-16T14:58:54.437+00:00|INFO|ServiceManager|http-nio-6969-exec-7] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpUpdate starting 15:01:14 policy-pap | [2025-06-16T14:58:54.437+00:00|INFO|ServiceManager|http-nio-6969-exec-7] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpUpdate starting listener 15:01:14 policy-pap | [2025-06-16T14:58:54.437+00:00|INFO|ServiceManager|http-nio-6969-exec-7] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpUpdate starting timer 15:01:14 policy-pap | [2025-06-16T14:58:54.437+00:00|INFO|TimerManager|http-nio-6969-exec-7] update timer registered Timer [name=39b5658c-5e3b-4103-b789-ca3be64640d6, expireMs=1750085964437] 15:01:14 policy-pap | [2025-06-16T14:58:54.437+00:00|INFO|ServiceManager|http-nio-6969-exec-7] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpUpdate starting enqueue 15:01:14 policy-pap | [2025-06-16T14:58:54.437+00:00|INFO|ServiceManager|http-nio-6969-exec-7] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpUpdate started 15:01:14 policy-pap | [2025-06-16T14:58:54.437+00:00|INFO|TimerManager|Thread-9] update timer waiting 30000ms Timer [name=39b5658c-5e3b-4103-b789-ca3be64640d6, expireMs=1750085964437] 15:01:14 policy-pap | [2025-06-16T14:58:54.438+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 15:01:14 policy-pap | 
{"source":"pap-ef1c2203-595c-4ac7-b11f-f04e7efc9986","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.zoneB":"ewogICJ6b25lIjogewogICAgInpvbmVfYWNjZXNzX2xvZ3MiOiBbCiAgICAgIHsgImxvZ19pZCI6ICJsb2cxIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDA5OjAwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJncmFudGVkIiwgInVzZXIiOiAidXNlcjEiIH0sCiAgICAgIHsgImxvZ19pZCI6ICJsb2cyIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDEwOjMwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJkZW5pZWQiLCAidXNlciI6ICJ1c2VyMiIgfSwKICAgICAgeyAibG9nX2lkIjogImxvZzMiLCAidGltZXN0YW1wIjogIjIwMjQtMTEtMDFUMTE6MDA6MDBaIiwgInpvbmVfaWQiOiAiem9uZUIiLCAiYWNjZXNzIjogImdyYW50ZWQiLCAidXNlciI6ICJ1c2VyMyIgfQogICAgXQogIH0KfQ=="},"policy":{"zoneB":"cGFja2FnZSB6b25lQgogCmltcG9ydCByZWdvLnYxCiAKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQogCmFsbG93IGlmIHsKICAgIGhhc196b25lX2FjY2VzcwogICAgYWN0aW9uX2lzX2xvZ192aWV3Cn0KIAphY3Rpb25faXNfbG9nX3ZpZXcgaWYgewogICAgInZpZXciIGluIGlucHV0LmFjdGlvbnMKfQogCmhhc196b25lX2FjY2VzcyBjb250YWlucyBhY2Nlc3NfZGF0YSBpZiB7CiAgICBzb21lIHpvbmVfZGF0YSBpbiBkYXRhLm5vZGUuem9uZUIuem9uZS56b25lX2FjY2Vzc19sb2dzCiAgICB6b25lX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KICAgIHpvbmVfZGF0YS50aW1lc3RhbXAgPCBpbnB1dC50aW1lX3BlcmlvZC50bwogICAgem9uZV9kYXRhLnpvbmVfaWQgPT0gaW5wdXQuem9uZV9pZAogICAgYWNjZXNzX2RhdGEgOj0ge2RhdGF0eXBlOiB6b25lX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"zoneB","version":"1.0.6","metadata":{"policy-id":"zoneB","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"39b5658c-5e3b-4103-b789-ca3be64640d6","timestampMs":1750085934380,"name":"opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 15:01:14 policy-pap | [2025-06-16T14:58:54.450+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 15:01:14 policy-pap | 
{"source":"pap-ef1c2203-595c-4ac7-b11f-f04e7efc9986","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.zoneB":"ewogICJ6b25lIjogewogICAgInpvbmVfYWNjZXNzX2xvZ3MiOiBbCiAgICAgIHsgImxvZ19pZCI6ICJsb2cxIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDA5OjAwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJncmFudGVkIiwgInVzZXIiOiAidXNlcjEiIH0sCiAgICAgIHsgImxvZ19pZCI6ICJsb2cyIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDEwOjMwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJkZW5pZWQiLCAidXNlciI6ICJ1c2VyMiIgfSwKICAgICAgeyAibG9nX2lkIjogImxvZzMiLCAidGltZXN0YW1wIjogIjIwMjQtMTEtMDFUMTE6MDA6MDBaIiwgInpvbmVfaWQiOiAiem9uZUIiLCAiYWNjZXNzIjogImdyYW50ZWQiLCAidXNlciI6ICJ1c2VyMyIgfQogICAgXQogIH0KfQ=="},"policy":{"zoneB":"cGFja2FnZSB6b25lQgogCmltcG9ydCByZWdvLnYxCiAKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQogCmFsbG93IGlmIHsKICAgIGhhc196b25lX2FjY2VzcwogICAgYWN0aW9uX2lzX2xvZ192aWV3Cn0KIAphY3Rpb25faXNfbG9nX3ZpZXcgaWYgewogICAgInZpZXciIGluIGlucHV0LmFjdGlvbnMKfQogCmhhc196b25lX2FjY2VzcyBjb250YWlucyBhY2Nlc3NfZGF0YSBpZiB7CiAgICBzb21lIHpvbmVfZGF0YSBpbiBkYXRhLm5vZGUuem9uZUIuem9uZS56b25lX2FjY2Vzc19sb2dzCiAgICB6b25lX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KICAgIHpvbmVfZGF0YS50aW1lc3RhbXAgPCBpbnB1dC50aW1lX3BlcmlvZC50bwogICAgem9uZV9kYXRhLnpvbmVfaWQgPT0gaW5wdXQuem9uZV9pZAogICAgYWNjZXNzX2RhdGEgOj0ge2RhdGF0eXBlOiB6b25lX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"zoneB","version":"1.0.6","metadata":{"policy-id":"zoneB","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"39b5658c-5e3b-4103-b789-ca3be64640d6","timestampMs":1750085934380,"name":"opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 15:01:14 policy-pap | [2025-06-16T14:58:54.450+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 15:01:14 policy-pap | [2025-06-16T14:58:54.453+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 15:01:14 policy-pap | 
{"source":"pap-ef1c2203-595c-4ac7-b11f-f04e7efc9986","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.zoneB":"ewogICJ6b25lIjogewogICAgInpvbmVfYWNjZXNzX2xvZ3MiOiBbCiAgICAgIHsgImxvZ19pZCI6ICJsb2cxIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDA5OjAwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJncmFudGVkIiwgInVzZXIiOiAidXNlcjEiIH0sCiAgICAgIHsgImxvZ19pZCI6ICJsb2cyIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDEwOjMwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJkZW5pZWQiLCAidXNlciI6ICJ1c2VyMiIgfSwKICAgICAgeyAibG9nX2lkIjogImxvZzMiLCAidGltZXN0YW1wIjogIjIwMjQtMTEtMDFUMTE6MDA6MDBaIiwgInpvbmVfaWQiOiAiem9uZUIiLCAiYWNjZXNzIjogImdyYW50ZWQiLCAidXNlciI6ICJ1c2VyMyIgfQogICAgXQogIH0KfQ=="},"policy":{"zoneB":"cGFja2FnZSB6b25lQgogCmltcG9ydCByZWdvLnYxCiAKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQogCmFsbG93IGlmIHsKICAgIGhhc196b25lX2FjY2VzcwogICAgYWN0aW9uX2lzX2xvZ192aWV3Cn0KIAphY3Rpb25faXNfbG9nX3ZpZXcgaWYgewogICAgInZpZXciIGluIGlucHV0LmFjdGlvbnMKfQogCmhhc196b25lX2FjY2VzcyBjb250YWlucyBhY2Nlc3NfZGF0YSBpZiB7CiAgICBzb21lIHpvbmVfZGF0YSBpbiBkYXRhLm5vZGUuem9uZUIuem9uZS56b25lX2FjY2Vzc19sb2dzCiAgICB6b25lX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KICAgIHpvbmVfZGF0YS50aW1lc3RhbXAgPCBpbnB1dC50aW1lX3BlcmlvZC50bwogICAgem9uZV9kYXRhLnpvbmVfaWQgPT0gaW5wdXQuem9uZV9pZAogICAgYWNjZXNzX2RhdGEgOj0ge2RhdGF0eXBlOiB6b25lX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"zoneB","version":"1.0.6","metadata":{"policy-id":"zoneB","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"39b5658c-5e3b-4103-b789-ca3be64640d6","timestampMs":1750085934380,"name":"opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 15:01:14 policy-pap | [2025-06-16T14:58:54.453+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 15:01:14 policy-pap | [2025-06-16T14:58:54.488+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 15:01:14 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"39b5658c-5e3b-4103-b789-ca3be64640d6","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"zoneB\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"zoneB","version":"1.0.6"}],"name":"opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c","requestId":"2dd33f87-288a-4a72-87c4-a1b89ba21549","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750085934473","deploymentInstanceInfo":""} 15:01:14 policy-pap | [2025-06-16T14:58:54.488+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 15:01:14 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"39b5658c-5e3b-4103-b789-ca3be64640d6","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"zoneB\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"zoneB","version":"1.0.6"}],"name":"opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c","requestId":"2dd33f87-288a-4a72-87c4-a1b89ba21549","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750085934473","deploymentInstanceInfo":""} 15:01:14 policy-pap | [2025-06-16T14:58:54.488+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c 
PdpUpdate stopping 15:01:14 policy-pap | [2025-06-16T14:58:54.488+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpUpdate stopping enqueue 15:01:14 policy-pap | [2025-06-16T14:58:54.488+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpUpdate stopping timer 15:01:14 policy-pap | [2025-06-16T14:58:54.488+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=39b5658c-5e3b-4103-b789-ca3be64640d6, expireMs=1750085964437] 15:01:14 policy-pap | [2025-06-16T14:58:54.488+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpUpdate stopping listener 15:01:14 policy-pap | [2025-06-16T14:58:54.488+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpUpdate stopped 15:01:14 policy-pap | [2025-06-16T14:58:54.489+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 39b5658c-5e3b-4103-b789-ca3be64640d6 15:01:14 policy-pap | [2025-06-16T14:58:54.502+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpUpdate successful 15:01:14 policy-pap | [2025-06-16T14:58:54.502+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c has no more requests 15:01:14 policy-pap | [2025-06-16T14:58:54.502+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification] 15:01:14 policy-pap | {"deployed-policies":[{"policy-type":"onap.policies.native.opa","policy-type-version":"1.0.0","policy-id":"zoneB","policy-version":"1.0.6","success-count":1,"failure-count":0,"incomplete-count":0}],"undeployed-policies":[]} 15:01:14 policy-pap | [2025-06-16T14:59:18.958+00:00|INFO|SessionData|http-nio-6969-exec-8] cache group opaGroup 15:01:14 policy-pap | [2025-06-16T14:59:18.960+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-8] remove policy zoneB 1.0.6 from subgroup opaGroup opa count=1 15:01:14 policy-pap | [2025-06-16T14:59:18.960+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-8] Registering an undeploy for policy zoneB 1.0.6 15:01:14 policy-pap | [2025-06-16T14:59:18.960+00:00|INFO|SessionData|http-nio-6969-exec-8] add update opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c opaGroup opa policies=0 15:01:14 policy-pap | [2025-06-16T14:59:18.960+00:00|INFO|SessionData|http-nio-6969-exec-8] update cached group opaGroup 15:01:14 policy-pap | [2025-06-16T14:59:18.960+00:00|INFO|SessionData|http-nio-6969-exec-8] updating DB group opaGroup 15:01:14 policy-pap | [2025-06-16T14:59:18.971+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-8] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=opaGroup, pdpType=opa, policy=zoneB 1.0.6, action=UNDEPLOYMENT, timestamp=2025-06-16T14:59:18Z, user=policyadmin)] 15:01:14 policy-pap | [2025-06-16T14:59:18.985+00:00|INFO|ServiceManager|http-nio-6969-exec-8] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpUpdate starting 15:01:14 policy-pap | [2025-06-16T14:59:18.985+00:00|INFO|ServiceManager|http-nio-6969-exec-8] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpUpdate starting listener 15:01:14 policy-pap | [2025-06-16T14:59:18.985+00:00|INFO|ServiceManager|http-nio-6969-exec-8] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpUpdate starting timer 15:01:14 policy-pap | [2025-06-16T14:59:18.985+00:00|INFO|TimerManager|http-nio-6969-exec-8] update timer registered Timer [name=2f4ee97e-e981-4cd6-90fa-505790333b58, expireMs=1750085988985] 15:01:14 policy-pap | 
[2025-06-16T14:59:18.985+00:00|INFO|ServiceManager|http-nio-6969-exec-8] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpUpdate starting enqueue 15:01:14 policy-pap | [2025-06-16T14:59:18.985+00:00|INFO|ServiceManager|http-nio-6969-exec-8] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpUpdate started 15:01:14 policy-pap | [2025-06-16T14:59:18.986+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 15:01:14 policy-pap | {"source":"pap-ef1c2203-595c-4ac7-b11f-f04e7efc9986","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"zoneB","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"2f4ee97e-e981-4cd6-90fa-505790333b58","timestampMs":1750085958960,"name":"opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 15:01:14 policy-pap | [2025-06-16T14:59:18.994+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 15:01:14 policy-pap | {"source":"pap-ef1c2203-595c-4ac7-b11f-f04e7efc9986","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"zoneB","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"2f4ee97e-e981-4cd6-90fa-505790333b58","timestampMs":1750085958960,"name":"opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 15:01:14 policy-pap | [2025-06-16T14:59:18.994+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 15:01:14 policy-pap | [2025-06-16T14:59:18.994+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 15:01:14 policy-pap | {"source":"pap-ef1c2203-595c-4ac7-b11f-f04e7efc9986","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"zoneB","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"2f4ee97e-e981-4cd6-90fa-505790333b58","timestampMs":1750085958960,"name":"opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 15:01:14 policy-pap | [2025-06-16T14:59:18.994+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 15:01:14 policy-pap | [2025-06-16T14:59:19.010+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 15:01:14 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"2f4ee97e-e981-4cd6-90fa-505790333b58","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"zoneB\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c","requestId":"2ab93fff-b361-4ed3-b748-e7e615842adb","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750085958997","deploymentInstanceInfo":""} 15:01:14 policy-pap | [2025-06-16T14:59:19.010+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 15:01:14 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"2f4ee97e-e981-4cd6-90fa-505790333b58","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"zoneB\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c","requestId":"2ab93fff-b361-4ed3-b748-e7e615842adb","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750085958997","deploymentInstanceInfo":""} 15:01:14 policy-pap | 
[2025-06-16T14:59:19.011+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpUpdate stopping 15:01:14 policy-pap | [2025-06-16T14:59:19.011+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpUpdate stopping enqueue 15:01:14 policy-pap | [2025-06-16T14:59:19.011+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 2f4ee97e-e981-4cd6-90fa-505790333b58 15:01:14 policy-pap | [2025-06-16T14:59:19.011+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpUpdate stopping timer 15:01:14 policy-pap | [2025-06-16T14:59:19.011+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=2f4ee97e-e981-4cd6-90fa-505790333b58, expireMs=1750085988985] 15:01:14 policy-pap | [2025-06-16T14:59:19.011+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpUpdate stopping listener 15:01:14 policy-pap | [2025-06-16T14:59:19.011+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpUpdate stopped 15:01:14 policy-pap | [2025-06-16T14:59:19.033+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpUpdate successful 15:01:14 policy-pap | [2025-06-16T14:59:19.033+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c has no more requests 15:01:14 policy-pap | [2025-06-16T14:59:19.033+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification] 15:01:14 policy-pap | {"deployed-policies":[],"undeployed-policies":[{"policy-type":"onap.policies.native.opa","policy-type-version":"1.0.0","policy-id":"zoneB","policy-version":"1.0.6","success-count":1,"failure-count":0,"incomplete-count":0}]} 15:01:14 policy-pap | [2025-06-16T14:59:19.418+00:00|INFO|SessionData|http-nio-6969-exec-9] cache group opaGroup 15:01:14 policy-pap | [2025-06-16T14:59:19.421+00:00|WARN|PdpGroupDeleteProvider|http-nio-6969-exec-9] failed to undeploy policy: zoneB null 15:01:14 policy-pap | [2025-06-16T14:59:19.421+00:00|WARN|PdpGroupDeleteControllerV1|http-nio-6969-exec-9] undeploy policy failed 15:01:14 policy-pap | org.onap.policy.models.base.PfModelException: policy does not appear in any PDP group: zoneB null 15:01:14 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteProvider.undeployPolicy(PdpGroupDeleteProvider.java:108) 15:01:14 policy-pap | at org.onap.policy.pap.main.rest.ProviderBase.process(ProviderBase.java:161) 15:01:14 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteProvider.undeploy(PdpGroupDeleteProvider.java:92) 15:01:14 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 15:01:14 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) 15:01:14 policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 15:01:14 policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569) 15:01:14 policy-pap | at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:359) 15:01:14 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:196) 15:01:14 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163) 15:01:14 policy-pap | at 
org.springframework.aop.aspectj.AspectJAfterThrowingAdvice.invoke(AspectJAfterThrowingAdvice.java:64) 15:01:14 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) 15:01:14 policy-pap | at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:97) 15:01:14 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) 15:01:14 policy-pap | at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:728) 15:01:14 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteProvider$$SpringCGLIB$$0.undeploy() 15:01:14 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.lambda$deletePolicy$1(PdpGroupDeleteControllerV1.java:107) 15:01:14 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.doUndeployOperation(PdpGroupDeleteControllerV1.java:160) 15:01:14 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.deletePolicy(PdpGroupDeleteControllerV1.java:106) 15:01:14 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 15:01:14 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) 15:01:14 policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 15:01:14 policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569) 15:01:14 policy-pap | at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:359) 15:01:14 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:196) 15:01:14 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163) 15:01:14 policy-pap | at org.springframework.aop.aspectj.AspectJAfterThrowingAdvice.invoke(AspectJAfterThrowingAdvice.java:64) 15:01:14 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) 15:01:14 policy-pap | at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:97) 15:01:14 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) 15:01:14 policy-pap | at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:728) 15:01:14 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1$$SpringCGLIB$$0.deletePolicy() 15:01:14 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 15:01:14 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) 15:01:14 policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 15:01:14 policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569) 15:01:14 policy-pap | at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:258) 15:01:14 policy-pap | at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:191) 15:01:14 policy-pap | at 
org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:118) 15:01:14 policy-pap | at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:986) 15:01:14 policy-pap | at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:891) 15:01:14 policy-pap | at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87) 15:01:14 policy-pap | at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1089) 15:01:14 policy-pap | at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:979) 15:01:14 policy-pap | at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1014) 15:01:14 policy-pap | at org.springframework.web.servlet.FrameworkServlet.doDelete(FrameworkServlet.java:936) 15:01:14 policy-pap | at jakarta.servlet.http.HttpServlet.service(HttpServlet.java:659) 15:01:14 policy-pap | at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:885) 15:01:14 policy-pap | at jakarta.servlet.http.HttpServlet.service(HttpServlet.java:723) 15:01:14 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:195) 15:01:14 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 15:01:14 policy-pap | at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:51) 15:01:14 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) 15:01:14 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 15:01:14 policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:108) 15:01:14 policy-pap | at org.springframework.security.web.FilterChainProxy.lambda$doFilterInternal$3(FilterChainProxy.java:231) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$FilterObservation$SimpleFilterObservation.lambda$wrap$1(ObservationFilterChainDecorator.java:479) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$AroundFilterObservation$SimpleAroundFilterObservation.lambda$wrap$1(ObservationFilterChainDecorator.java:340) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator.lambda$wrapSecured$0(ObservationFilterChainDecorator.java:82) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:128) 15:01:14 policy-pap | at org.springframework.security.web.access.intercept.AuthorizationFilter.doFilter(AuthorizationFilter.java:101) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 15:01:14 policy-pap | at 
org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:126) 15:01:14 policy-pap | at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:120) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 15:01:14 policy-pap | at org.springframework.security.web.authentication.AnonymousAuthenticationFilter.doFilter(AnonymousAuthenticationFilter.java:100) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 15:01:14 policy-pap | at org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter.doFilter(SecurityContextHolderAwareRequestFilter.java:179) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 15:01:14 policy-pap | at org.springframework.security.web.savedrequest.RequestCacheAwareFilter.doFilter(RequestCacheAwareFilter.java:63) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 15:01:14 policy-pap | at org.springframework.security.web.authentication.www.BasicAuthenticationFilter.doFilterInternal(BasicAuthenticationFilter.java:213) 15:01:14 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 15:01:14 policy-pap | at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:107) 15:01:14 policy-pap | at 
org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:93) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 15:01:14 policy-pap | at org.springframework.security.web.header.HeaderWriterFilter.doHeadersAfter(HeaderWriterFilter.java:90) 15:01:14 policy-pap | at org.springframework.security.web.header.HeaderWriterFilter.doFilterInternal(HeaderWriterFilter.java:75) 15:01:14 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 15:01:14 policy-pap | at org.springframework.security.web.context.SecurityContextHolderFilter.doFilter(SecurityContextHolderFilter.java:82) 15:01:14 policy-pap | at org.springframework.security.web.context.SecurityContextHolderFilter.doFilter(SecurityContextHolderFilter.java:69) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 15:01:14 policy-pap | at org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter.doFilterInternal(WebAsyncManagerIntegrationFilter.java:62) 15:01:14 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 15:01:14 policy-pap | at org.springframework.security.web.session.DisableEncodeUrlFilter.doFilterInternal(DisableEncodeUrlFilter.java:42) 15:01:14 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 15:01:14 policy-pap | at 
org.springframework.security.web.ObservationFilterChainDecorator$AroundFilterObservation$SimpleAroundFilterObservation.lambda$wrap$0(ObservationFilterChainDecorator.java:323) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:224) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 15:01:14 policy-pap | at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:233) 15:01:14 policy-pap | at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:191) 15:01:14 policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:113) 15:01:14 policy-pap | at org.springframework.web.servlet.handler.HandlerMappingIntrospector.lambda$createCacheFilter$3(HandlerMappingIntrospector.java:243) 15:01:14 policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:113) 15:01:14 policy-pap | at org.springframework.web.filter.CompositeFilter.doFilter(CompositeFilter.java:74) 15:01:14 policy-pap | at org.springframework.security.config.annotation.web.configuration.WebMvcSecurityConfiguration$CompositeFilterChainProxy.doFilter(WebMvcSecurityConfiguration.java:238) 15:01:14 policy-pap | at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:362) 15:01:14 policy-pap | at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:278) 15:01:14 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) 15:01:14 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 15:01:14 policy-pap | at org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:100) 15:01:14 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 15:01:14 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) 15:01:14 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 15:01:14 policy-pap | at org.springframework.web.filter.FormContentFilter.doFilterInternal(FormContentFilter.java:93) 15:01:14 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 15:01:14 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) 15:01:14 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 15:01:14 policy-pap | at org.springframework.web.filter.ServerHttpObservationFilter.doFilterInternal(ServerHttpObservationFilter.java:114) 15:01:14 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 15:01:14 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) 15:01:14 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 15:01:14 policy-pap | at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:201) 15:01:14 policy-pap | at 
org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 15:01:14 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) 15:01:14 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 15:01:14 policy-pap | at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:167) 15:01:14 policy-pap | at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:90) 15:01:14 policy-pap | at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:483) 15:01:14 policy-pap | at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:116) 15:01:14 policy-pap | at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:93) 15:01:14 policy-pap | at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:74) 15:01:14 policy-pap | at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:344) 15:01:14 policy-pap | at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:398) 15:01:14 policy-pap | at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:63) 15:01:14 policy-pap | at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:903) 15:01:14 policy-pap | at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1740) 15:01:14 policy-pap | at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:52) 15:01:14 policy-pap | at org.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1189) 15:01:14 policy-pap | at org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:658) 15:01:14 policy-pap | at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:63) 15:01:14 policy-pap | at java.base/java.lang.Thread.run(Thread.java:840) 15:01:14 policy-pap | [2025-06-16T14:59:20.249+00:00|INFO|SessionData|http-nio-6969-exec-10] cache group opaGroup 15:01:14 policy-pap | [2025-06-16T14:59:20.250+00:00|INFO|PdpGroupDeployProvider|http-nio-6969-exec-10] add policy vehicle 1.0.6 to subgroup opaGroup opa count=2 15:01:14 policy-pap | [2025-06-16T14:59:20.250+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-10] Registering a deploy for policy vehicle 1.0.6 15:01:14 policy-pap | [2025-06-16T14:59:20.250+00:00|INFO|SessionData|http-nio-6969-exec-10] add update opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c opaGroup opa policies=1 15:01:14 policy-pap | [2025-06-16T14:59:20.250+00:00|INFO|SessionData|http-nio-6969-exec-10] update cached group opaGroup 15:01:14 policy-pap | [2025-06-16T14:59:20.250+00:00|INFO|SessionData|http-nio-6969-exec-10] updating DB group opaGroup 15:01:14 policy-pap | [2025-06-16T14:59:20.263+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-10] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=opaGroup, pdpType=opa, policy=vehicle 1.0.6, action=DEPLOYMENT, timestamp=2025-06-16T14:59:20Z, user=policyadmin)] 15:01:14 policy-pap | [2025-06-16T14:59:20.273+00:00|INFO|ServiceManager|http-nio-6969-exec-10] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpUpdate starting 15:01:14 policy-pap | [2025-06-16T14:59:20.273+00:00|INFO|ServiceManager|http-nio-6969-exec-10] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpUpdate starting listener 15:01:14 policy-pap | [2025-06-16T14:59:20.273+00:00|INFO|ServiceManager|http-nio-6969-exec-10] 
opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpUpdate starting timer 15:01:14 policy-pap | [2025-06-16T14:59:20.273+00:00|INFO|TimerManager|http-nio-6969-exec-10] update timer registered Timer [name=ee93bcaf-6615-418e-922a-eb845e0869a2, expireMs=1750085990273] 15:01:14 policy-pap | [2025-06-16T14:59:20.273+00:00|INFO|ServiceManager|http-nio-6969-exec-10] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpUpdate starting enqueue 15:01:14 policy-pap | [2025-06-16T14:59:20.273+00:00|INFO|ServiceManager|http-nio-6969-exec-10] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpUpdate started 15:01:14 policy-pap | [2025-06-16T14:59:20.274+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 15:01:14 policy-pap | {"source":"pap-ef1c2203-595c-4ac7-b11f-f04e7efc9986","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.vehicle":"ewogICJ2ZWhpY2xlcyI6IFsKICAgIHsgInZlaGljbGVfaWQiOiAidjEiLCAib3duZXIiOiAidXNlcjEiLCAidHlwZSI6ICJjYXIiLCAic3RhdHVzIjogImF2YWlsYWJsZSIgfSwKICAgIHsgInZlaGljbGVfaWQiOiAidjIiLCAib3duZXIiOiAidXNlcjIiLCAidHlwZSI6ICJiaWtlIiwgInN0YXR1cyI6ICJpbiB1c2UiIH0KICBdCn0K"},"policy":{"vehicle":"cGFja2FnZSB2ZWhpY2xlCgppbXBvcnQgIHJlZ28udjEKCmRlZmF1bHQgYWxsb3cgOj0gZmFsc2UKCmFsbG93IGlmIHsKICAgIHVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzCiAgICBhY3Rpb25faXNfZ3JhbnRlZAp9CgphY3Rpb25faXNfZ3JhbnRlZCBpZiB7CiAgICAidXNlIiBpbiBpbnB1dC5hY3Rpb25zCn0KCnVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzIGNvbnRhaW5zIHZlaGljbGVfZGF0YSBpZiB7CiAgICBzb21lIHZlaGljbGUgaW4gZGF0YS5ub2RlLnZlaGljbGUudmVoaWNsZXMKICAgIHZlaGljbGUudmVoaWNsZV9pZCA9PSBpbnB1dC52ZWhpY2xlX2lkCiAgICB2ZWhpY2xlLm93bmVyID09IGlucHV0LnVzZXIKICAgIHZlaGljbGVfZGF0YSA6PSB7aW5mbzogdmVoaWNsZVtpbmZvXSB8IGluZm8gaW4gaW5wdXQuYXR0cmlidXRlc30KfQo="}},"name":"vehicle","version":"1.0.6","metadata":{"policy-id":"vehicle","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"ee93bcaf-6615-418e-922a-eb845e0869a2","timestampMs":1750085960250,"name":"opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 15:01:14 policy-pap | [2025-06-16T14:59:20.286+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 15:01:14 policy-pap | {"source":"pap-ef1c2203-595c-4ac7-b11f-f04e7efc9986","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.vehicle":"ewogICJ2ZWhpY2xlcyI6IFsKICAgIHsgInZlaGljbGVfaWQiOiAidjEiLCAib3duZXIiOiAidXNlcjEiLCAidHlwZSI6ICJjYXIiLCAic3RhdHVzIjogImF2YWlsYWJsZSIgfSwKICAgIHsgInZlaGljbGVfaWQiOiAidjIiLCAib3duZXIiOiAidXNlcjIiLCAidHlwZSI6ICJiaWtlIiwgInN0YXR1cyI6ICJpbiB1c2UiIH0KICBdCn0K"},"policy":{"vehicle":"cGFja2FnZSB2ZWhpY2xlCgppbXBvcnQgIHJlZ28udjEKCmRlZmF1bHQgYWxsb3cgOj0gZmFsc2UKCmFsbG93IGlmIHsKICAgIHVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzCiAgICBhY3Rpb25faXNfZ3JhbnRlZAp9CgphY3Rpb25faXNfZ3JhbnRlZCBpZiB7CiAgICAidXNlIiBpbiBpbnB1dC5hY3Rpb25zCn0KCnVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzIGNvbnRhaW5zIHZlaGljbGVfZGF0YSBpZiB7CiAgICBzb21lIHZlaGljbGUgaW4gZGF0YS5ub2RlLnZlaGljbGUudmVoaWNsZXMKICAgIHZlaGljbGUudmVoaWNsZV9pZCA9PSBpbnB1dC52ZWhpY2xlX2lkCiAgICB2ZWhpY2xlLm93bmVyID09IGlucHV0LnVzZXIKICAgIHZlaGljbGVfZGF0YSA6PSB7aW5mbzogdmVoaWNsZVtpbmZvXSB8IGluZm8gaW4gaW5wdXQuYXR0cmlidXRlc30KfQo="}},"name":"vehicle","version":"1.0.6","metadata":{"policy-id":"vehicle","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"ee93bcaf-6615-418e-922a-eb845e0869a2","timestampMs":1750085960250,"name":"opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 15:01:14 policy-pap | 
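For readability, the base64-encoded "data" and "policy" properties in this vehicle PDP_UPDATE decode to the following (decoded from the payload itself; indentation normalized):

node.vehicle (data):

    {
      "vehicles": [
        { "vehicle_id": "v1", "owner": "user1", "type": "car", "status": "available" },
        { "vehicle_id": "v2", "owner": "user2", "type": "bike", "status": "in use" }
      ]
    }

vehicle (policy):

    package vehicle

    import rego.v1

    default allow := false

    allow if {
        user_has_vehicle_access
        action_is_granted
    }

    action_is_granted if {
        "use" in input.actions
    }

    user_has_vehicle_access contains vehicle_data if {
        some vehicle in data.node.vehicle.vehicles
        vehicle.vehicle_id == input.vehicle_id
        vehicle.owner == input.user
        vehicle_data := {info: vehicle[info] | info in input.attributes}
    }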
[2025-06-16T14:59:20.290+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 15:01:14 policy-pap | {"source":"pap-ef1c2203-595c-4ac7-b11f-f04e7efc9986","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.vehicle":"ewogICJ2ZWhpY2xlcyI6IFsKICAgIHsgInZlaGljbGVfaWQiOiAidjEiLCAib3duZXIiOiAidXNlcjEiLCAidHlwZSI6ICJjYXIiLCAic3RhdHVzIjogImF2YWlsYWJsZSIgfSwKICAgIHsgInZlaGljbGVfaWQiOiAidjIiLCAib3duZXIiOiAidXNlcjIiLCAidHlwZSI6ICJiaWtlIiwgInN0YXR1cyI6ICJpbiB1c2UiIH0KICBdCn0K"},"policy":{"vehicle":"cGFja2FnZSB2ZWhpY2xlCgppbXBvcnQgIHJlZ28udjEKCmRlZmF1bHQgYWxsb3cgOj0gZmFsc2UKCmFsbG93IGlmIHsKICAgIHVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzCiAgICBhY3Rpb25faXNfZ3JhbnRlZAp9CgphY3Rpb25faXNfZ3JhbnRlZCBpZiB7CiAgICAidXNlIiBpbiBpbnB1dC5hY3Rpb25zCn0KCnVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzIGNvbnRhaW5zIHZlaGljbGVfZGF0YSBpZiB7CiAgICBzb21lIHZlaGljbGUgaW4gZGF0YS5ub2RlLnZlaGljbGUudmVoaWNsZXMKICAgIHZlaGljbGUudmVoaWNsZV9pZCA9PSBpbnB1dC52ZWhpY2xlX2lkCiAgICB2ZWhpY2xlLm93bmVyID09IGlucHV0LnVzZXIKICAgIHZlaGljbGVfZGF0YSA6PSB7aW5mbzogdmVoaWNsZVtpbmZvXSB8IGluZm8gaW4gaW5wdXQuYXR0cmlidXRlc30KfQo="}},"name":"vehicle","version":"1.0.6","metadata":{"policy-id":"vehicle","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"ee93bcaf-6615-418e-922a-eb845e0869a2","timestampMs":1750085960250,"name":"opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 15:01:14 policy-pap | [2025-06-16T14:59:20.290+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 15:01:14 policy-pap | [2025-06-16T14:59:20.291+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 15:01:14 policy-pap | [2025-06-16T14:59:20.320+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 15:01:14 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"ee93bcaf-6615-418e-922a-eb845e0869a2","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"vehicle\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"vehicle","version":"1.0.6"}],"name":"opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c","requestId":"051d5acf-ecb1-44ff-bed7-8c5ccea8992f","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750085960307","deploymentInstanceInfo":""} 15:01:14 policy-pap | [2025-06-16T14:59:20.320+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 15:01:14 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"ee93bcaf-6615-418e-922a-eb845e0869a2","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"vehicle\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"vehicle","version":"1.0.6"}],"name":"opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c","requestId":"051d5acf-ecb1-44ff-bed7-8c5ccea8992f","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750085960307","deploymentInstanceInfo":""} 15:01:14 policy-pap | [2025-06-16T14:59:20.321+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id ee93bcaf-6615-418e-922a-eb845e0869a2 15:01:14 policy-pap | 
[2025-06-16T14:59:20.322+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpUpdate stopping 15:01:14 policy-pap | [2025-06-16T14:59:20.322+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpUpdate stopping enqueue 15:01:14 policy-pap | [2025-06-16T14:59:20.322+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpUpdate stopping timer 15:01:14 policy-pap | [2025-06-16T14:59:20.322+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=ee93bcaf-6615-418e-922a-eb845e0869a2, expireMs=1750085990273] 15:01:14 policy-pap | [2025-06-16T14:59:20.322+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpUpdate stopping listener 15:01:14 policy-pap | [2025-06-16T14:59:20.322+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpUpdate stopped 15:01:14 policy-pap | [2025-06-16T14:59:20.331+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpUpdate successful 15:01:14 policy-pap | [2025-06-16T14:59:20.331+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c has no more requests 15:01:14 policy-pap | [2025-06-16T14:59:20.332+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification] 15:01:14 policy-pap | {"deployed-policies":[{"policy-type":"onap.policies.native.opa","policy-type-version":"1.0.0","policy-id":"vehicle","policy-version":"1.0.6","success-count":1,"failure-count":0,"incomplete-count":0}],"undeployed-policies":[]} 15:01:14 policy-pap | [2025-06-16T14:59:24.438+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=39b5658c-5e3b-4103-b789-ca3be64640d6, expireMs=1750085964437] 15:01:14 policy-pap | [2025-06-16T14:59:44.736+00:00|INFO|SessionData|http-nio-6969-exec-2] cache group opaGroup 15:01:14 policy-pap | [2025-06-16T14:59:44.736+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-2] remove policy vehicle 1.0.6 from subgroup opaGroup opa count=1 15:01:14 policy-pap | [2025-06-16T14:59:44.736+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-2] Registering an undeploy for policy vehicle 1.0.6 15:01:14 policy-pap | [2025-06-16T14:59:44.736+00:00|INFO|SessionData|http-nio-6969-exec-2] add update opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c opaGroup opa policies=0 15:01:14 policy-pap | [2025-06-16T14:59:44.736+00:00|INFO|SessionData|http-nio-6969-exec-2] update cached group opaGroup 15:01:14 policy-pap | [2025-06-16T14:59:44.737+00:00|INFO|SessionData|http-nio-6969-exec-2] updating DB group opaGroup 15:01:14 policy-pap | [2025-06-16T14:59:44.745+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-2] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=opaGroup, pdpType=opa, policy=vehicle 1.0.6, action=UNDEPLOYMENT, timestamp=2025-06-16T14:59:44Z, user=policyadmin)] 15:01:14 policy-pap | [2025-06-16T14:59:44.754+00:00|INFO|ServiceManager|http-nio-6969-exec-2] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpUpdate starting 15:01:14 policy-pap | [2025-06-16T14:59:44.754+00:00|INFO|ServiceManager|http-nio-6969-exec-2] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpUpdate starting listener 15:01:14 policy-pap | [2025-06-16T14:59:44.754+00:00|INFO|ServiceManager|http-nio-6969-exec-2] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpUpdate starting timer 15:01:14 policy-pap | 
[2025-06-16T14:59:44.754+00:00|INFO|TimerManager|http-nio-6969-exec-2] update timer registered Timer [name=c7ac162c-8487-4b7a-8206-a9bed30842a0, expireMs=1750086014754] 15:01:14 policy-pap | [2025-06-16T14:59:44.755+00:00|INFO|ServiceManager|http-nio-6969-exec-2] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpUpdate starting enqueue 15:01:14 policy-pap | [2025-06-16T14:59:44.755+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 15:01:14 policy-pap | {"source":"pap-ef1c2203-595c-4ac7-b11f-f04e7efc9986","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"vehicle","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"c7ac162c-8487-4b7a-8206-a9bed30842a0","timestampMs":1750085984736,"name":"opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 15:01:14 policy-pap | [2025-06-16T14:59:44.755+00:00|INFO|TimerManager|Thread-9] update timer waiting 29999ms Timer [name=c7ac162c-8487-4b7a-8206-a9bed30842a0, expireMs=1750086014754] 15:01:14 policy-pap | [2025-06-16T14:59:44.755+00:00|INFO|ServiceManager|http-nio-6969-exec-2] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpUpdate started 15:01:14 policy-pap | [2025-06-16T14:59:44.765+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 15:01:14 policy-pap | {"source":"pap-ef1c2203-595c-4ac7-b11f-f04e7efc9986","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"vehicle","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"c7ac162c-8487-4b7a-8206-a9bed30842a0","timestampMs":1750085984736,"name":"opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 15:01:14 policy-pap | [2025-06-16T14:59:44.765+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 15:01:14 policy-pap | [2025-06-16T14:59:44.768+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 15:01:14 policy-pap | {"source":"pap-ef1c2203-595c-4ac7-b11f-f04e7efc9986","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"vehicle","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"c7ac162c-8487-4b7a-8206-a9bed30842a0","timestampMs":1750085984736,"name":"opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 15:01:14 policy-pap | [2025-06-16T14:59:44.768+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 15:01:14 policy-pap | [2025-06-16T14:59:44.774+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 15:01:14 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"c7ac162c-8487-4b7a-8206-a9bed30842a0","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"vehicle\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c","requestId":"6aea1fe4-f501-464d-bc72-e2c87fc025da","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750085984764","deploymentInstanceInfo":""} 15:01:14 policy-pap | [2025-06-16T14:59:44.775+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id c7ac162c-8487-4b7a-8206-a9bed30842a0 15:01:14 policy-pap | [2025-06-16T14:59:44.776+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 15:01:14 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status 
Response Message For Pdp Update","response":{"responseTo":"c7ac162c-8487-4b7a-8206-a9bed30842a0","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"vehicle\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c","requestId":"6aea1fe4-f501-464d-bc72-e2c87fc025da","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750085984764","deploymentInstanceInfo":""} 15:01:14 policy-pap | [2025-06-16T14:59:44.776+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpUpdate stopping 15:01:14 policy-pap | [2025-06-16T14:59:44.776+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpUpdate stopping enqueue 15:01:14 policy-pap | [2025-06-16T14:59:44.776+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpUpdate stopping timer 15:01:14 policy-pap | [2025-06-16T14:59:44.776+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=c7ac162c-8487-4b7a-8206-a9bed30842a0, expireMs=1750086014754] 15:01:14 policy-pap | [2025-06-16T14:59:44.776+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpUpdate stopping listener 15:01:14 policy-pap | [2025-06-16T14:59:44.776+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpUpdate stopped 15:01:14 policy-pap | [2025-06-16T14:59:44.792+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpUpdate successful 15:01:14 policy-pap | [2025-06-16T14:59:44.792+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c has no more requests 15:01:14 policy-pap | [2025-06-16T14:59:44.793+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification] 15:01:14 policy-pap | {"deployed-policies":[],"undeployed-policies":[{"policy-type":"onap.policies.native.opa","policy-type-version":"1.0.0","policy-id":"vehicle","policy-version":"1.0.6","success-count":1,"failure-count":0,"incomplete-count":0}]} 15:01:14 policy-pap | [2025-06-16T14:59:45.204+00:00|INFO|SessionData|http-nio-6969-exec-1] cache group opaGroup 15:01:14 policy-pap | [2025-06-16T14:59:45.204+00:00|WARN|PdpGroupDeleteProvider|http-nio-6969-exec-1] failed to undeploy policy: vehicle null 15:01:14 policy-pap | [2025-06-16T14:59:45.204+00:00|WARN|PdpGroupDeleteControllerV1|http-nio-6969-exec-1] undeploy policy failed 15:01:14 policy-pap | org.onap.policy.models.base.PfModelException: policy does not appear in any PDP group: vehicle null 15:01:14 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteProvider.undeployPolicy(PdpGroupDeleteProvider.java:108) 15:01:14 policy-pap | at org.onap.policy.pap.main.rest.ProviderBase.process(ProviderBase.java:161) 15:01:14 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteProvider.undeploy(PdpGroupDeleteProvider.java:92) 15:01:14 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 15:01:14 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) 15:01:14 policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 15:01:14 policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569) 15:01:14 policy-pap | at 
org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:359) 15:01:14 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:196) 15:01:14 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163) 15:01:14 policy-pap | at org.springframework.aop.aspectj.AspectJAfterThrowingAdvice.invoke(AspectJAfterThrowingAdvice.java:64) 15:01:14 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) 15:01:14 policy-pap | at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:97) 15:01:14 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) 15:01:14 policy-pap | at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:728) 15:01:14 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteProvider$$SpringCGLIB$$0.undeploy() 15:01:14 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.lambda$deletePolicy$1(PdpGroupDeleteControllerV1.java:107) 15:01:14 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.doUndeployOperation(PdpGroupDeleteControllerV1.java:160) 15:01:14 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.deletePolicy(PdpGroupDeleteControllerV1.java:106) 15:01:14 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 15:01:14 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) 15:01:14 policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 15:01:14 policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569) 15:01:14 policy-pap | at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:359) 15:01:14 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:196) 15:01:14 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163) 15:01:14 policy-pap | at org.springframework.aop.aspectj.AspectJAfterThrowingAdvice.invoke(AspectJAfterThrowingAdvice.java:64) 15:01:14 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) 15:01:14 policy-pap | at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:97) 15:01:14 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) 15:01:14 policy-pap | at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:728) 15:01:14 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1$$SpringCGLIB$$0.deletePolicy() 15:01:14 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 15:01:14 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) 15:01:14 policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 15:01:14 policy-pap | at 
java.base/java.lang.reflect.Method.invoke(Method.java:569) 15:01:14 policy-pap | at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:258) 15:01:14 policy-pap | at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:191) 15:01:14 policy-pap | at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:118) 15:01:14 policy-pap | at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:986) 15:01:14 policy-pap | at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:891) 15:01:14 policy-pap | at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87) 15:01:14 policy-pap | at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1089) 15:01:14 policy-pap | at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:979) 15:01:14 policy-pap | at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1014) 15:01:14 policy-pap | at org.springframework.web.servlet.FrameworkServlet.doDelete(FrameworkServlet.java:936) 15:01:14 policy-pap | at jakarta.servlet.http.HttpServlet.service(HttpServlet.java:659) 15:01:14 policy-pap | at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:885) 15:01:14 policy-pap | at jakarta.servlet.http.HttpServlet.service(HttpServlet.java:723) 15:01:14 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:195) 15:01:14 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 15:01:14 policy-pap | at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:51) 15:01:14 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) 15:01:14 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 15:01:14 policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:108) 15:01:14 policy-pap | at org.springframework.security.web.FilterChainProxy.lambda$doFilterInternal$3(FilterChainProxy.java:231) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$FilterObservation$SimpleFilterObservation.lambda$wrap$1(ObservationFilterChainDecorator.java:479) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$AroundFilterObservation$SimpleAroundFilterObservation.lambda$wrap$1(ObservationFilterChainDecorator.java:340) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator.lambda$wrapSecured$0(ObservationFilterChainDecorator.java:82) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:128) 15:01:14 policy-pap | at org.springframework.security.web.access.intercept.AuthorizationFilter.doFilter(AuthorizationFilter.java:101) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 15:01:14 policy-pap | at 
org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 15:01:14 policy-pap | at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:126) 15:01:14 policy-pap | at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:120) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 15:01:14 policy-pap | at org.springframework.security.web.authentication.AnonymousAuthenticationFilter.doFilter(AnonymousAuthenticationFilter.java:100) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 15:01:14 policy-pap | at org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter.doFilter(SecurityContextHolderAwareRequestFilter.java:179) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 15:01:14 policy-pap | at org.springframework.security.web.savedrequest.RequestCacheAwareFilter.doFilter(RequestCacheAwareFilter.java:63) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 15:01:14 policy-pap | at org.springframework.security.web.authentication.www.BasicAuthenticationFilter.doFilterInternal(BasicAuthenticationFilter.java:213) 15:01:14 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 15:01:14 
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 15:01:14 policy-pap | at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:107) 15:01:14 policy-pap | at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:93) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 15:01:14 policy-pap | at org.springframework.security.web.header.HeaderWriterFilter.doHeadersAfter(HeaderWriterFilter.java:90) 15:01:14 policy-pap | at org.springframework.security.web.header.HeaderWriterFilter.doFilterInternal(HeaderWriterFilter.java:75) 15:01:14 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 15:01:14 policy-pap | at org.springframework.security.web.context.SecurityContextHolderFilter.doFilter(SecurityContextHolderFilter.java:82) 15:01:14 policy-pap | at org.springframework.security.web.context.SecurityContextHolderFilter.doFilter(SecurityContextHolderFilter.java:69) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 15:01:14 policy-pap | at org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter.doFilterInternal(WebAsyncManagerIntegrationFilter.java:62) 15:01:14 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 15:01:14 policy-pap | at org.springframework.security.web.session.DisableEncodeUrlFilter.doFilterInternal(DisableEncodeUrlFilter.java:42) 15:01:14 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 15:01:14 policy-pap | at 
org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$AroundFilterObservation$SimpleAroundFilterObservation.lambda$wrap$0(ObservationFilterChainDecorator.java:323) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:224) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 15:01:14 policy-pap | at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:233) 15:01:14 policy-pap | at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:191) 15:01:14 policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:113) 15:01:14 policy-pap | at org.springframework.web.servlet.handler.HandlerMappingIntrospector.lambda$createCacheFilter$3(HandlerMappingIntrospector.java:243) 15:01:14 policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:113) 15:01:14 policy-pap | at org.springframework.web.filter.CompositeFilter.doFilter(CompositeFilter.java:74) 15:01:14 policy-pap | at org.springframework.security.config.annotation.web.configuration.WebMvcSecurityConfiguration$CompositeFilterChainProxy.doFilter(WebMvcSecurityConfiguration.java:238) 15:01:14 policy-pap | at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:362) 15:01:14 policy-pap | at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:278) 15:01:14 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) 15:01:14 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 15:01:14 policy-pap | at org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:100) 15:01:14 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 15:01:14 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) 15:01:14 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 15:01:14 policy-pap | at org.springframework.web.filter.FormContentFilter.doFilterInternal(FormContentFilter.java:93) 15:01:14 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 15:01:14 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) 15:01:14 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 15:01:14 policy-pap | at org.springframework.web.filter.ServerHttpObservationFilter.doFilterInternal(ServerHttpObservationFilter.java:114) 15:01:14 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 15:01:14 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) 15:01:14 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 15:01:14 policy-pap | at 
org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:201) 15:01:14 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 15:01:14 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) 15:01:14 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 15:01:14 policy-pap | at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:167) 15:01:14 policy-pap | at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:90) 15:01:14 policy-pap | at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:483) 15:01:14 policy-pap | at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:116) 15:01:14 policy-pap | at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:93) 15:01:14 policy-pap | at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:74) 15:01:14 policy-pap | at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:344) 15:01:14 policy-pap | at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:398) 15:01:14 policy-pap | at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:63) 15:01:14 policy-pap | at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:903) 15:01:14 policy-pap | at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1740) 15:01:14 policy-pap | at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:52) 15:01:14 policy-pap | at org.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1189) 15:01:14 policy-pap | at org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:658) 15:01:14 policy-pap | at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:63) 15:01:14 policy-pap | at java.base/java.lang.Thread.run(Thread.java:840) 15:01:14 policy-pap | [2025-06-16T14:59:45.946+00:00|INFO|SessionData|http-nio-6969-exec-4] cache group opaGroup 15:01:14 policy-pap | [2025-06-16T14:59:45.946+00:00|INFO|PdpGroupDeployProvider|http-nio-6969-exec-4] add policy abac 1.0.7 to subgroup opaGroup opa count=2 15:01:14 policy-pap | [2025-06-16T14:59:45.947+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-4] Registering a deploy for policy abac 1.0.7 15:01:14 policy-pap | [2025-06-16T14:59:45.947+00:00|INFO|SessionData|http-nio-6969-exec-4] add update opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c opaGroup opa policies=1 15:01:14 policy-pap | [2025-06-16T14:59:45.947+00:00|INFO|SessionData|http-nio-6969-exec-4] update cached group opaGroup 15:01:14 policy-pap | [2025-06-16T14:59:45.947+00:00|INFO|SessionData|http-nio-6969-exec-4] updating DB group opaGroup 15:01:14 policy-pap | [2025-06-16T14:59:45.954+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-4] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=opaGroup, pdpType=opa, policy=abac 1.0.7, action=DEPLOYMENT, timestamp=2025-06-16T14:59:45Z, user=policyadmin)] 15:01:14 policy-pap | [2025-06-16T14:59:45.961+00:00|INFO|ServiceManager|http-nio-6969-exec-4] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpUpdate starting 15:01:14 policy-pap | [2025-06-16T14:59:45.961+00:00|INFO|ServiceManager|http-nio-6969-exec-4] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpUpdate 
starting listener 15:01:14 policy-pap | [2025-06-16T14:59:45.962+00:00|INFO|ServiceManager|http-nio-6969-exec-4] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpUpdate starting timer 15:01:14 policy-pap | [2025-06-16T14:59:45.962+00:00|INFO|TimerManager|http-nio-6969-exec-4] update timer registered Timer [name=967b7747-4de9-4350-a28c-de20927bd02f, expireMs=1750086015962] 15:01:14 policy-pap | [2025-06-16T14:59:45.962+00:00|INFO|ServiceManager|http-nio-6969-exec-4] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpUpdate starting enqueue 15:01:14 policy-pap | [2025-06-16T14:59:45.962+00:00|INFO|ServiceManager|http-nio-6969-exec-4] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpUpdate started 15:01:14 policy-pap | [2025-06-16T14:59:45.963+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 15:01:14 policy-pap | {"source":"pap-ef1c2203-595c-4ac7-b11f-f04e7efc9986","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.abac":"ewogICAgInNlbnNvcl9kYXRhIjogWwogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDEiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiU3JpIExhbmthIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjI4IEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjYiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDAyIiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkNvbG9tYm8iLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMzAgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjEyMDAgbW0iLAogICAgICAgICAgICAid2luZHNwZWVkIjogIjYuMCBtL3MiLAogICAgICAgICAgICAiaHVtaWRpdHkiOiAiNDUlIiwKICAgICAgICAgICAgInBhcnRpY2xlX2RlbnNpdHkiOiAiMS41IGcvbCIsCiAgICAgICAgICAgICJ0aW1lc3RhbXAiOiAiMjAyNC0wMi0yNiIKICAgICAgICB9LAogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDMiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiS2FuZHkiLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMjUgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjgwMCBtbSIsCiAgICAgICAgICAgICJ3aW5kc3BlZWQiOiAiNC41IG0vcyIsCiAgICAgICAgICAgICJodW1pZGl0eSI6ICI2MCUiLAogICAgICAgICAgICAicGFydGljbGVfZGVuc2l0eSI6ICIxLjEgZy9sIiwKICAgICAgICAgICAgInRpbWVzdGFtcCI6ICIyMDI0LTAyLTI2IgogICAgICAgIH0sCiAgICAgICAgewogICAgICAgICAgICAiaWQiOiAiMDAwNCIsCiAgICAgICAgICAgICJsb2NhdGlvbiI6ICJHYWxsZSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI3LjIgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjMwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuOCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA1IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkphZmZuYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICItNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiMzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICIzLjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjIwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjAuOSBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA2IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIlRyaW5jb21hbGVlIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjIwIEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjU1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA3IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk51d2FyYSBFbGl5YSIsCiAgI
CAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNjAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI0LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjUwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA4IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkFudXJhZGhhcHVyYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyOCBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA5IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk1hdGFyYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzMiBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiOTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI2LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjY1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfQogICAgXQp9"},"policy":{"abac":"cGFja2FnZSBhYmFjCgppbXBvcnQgcmVnby52MQoKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQoKYWxsb3cgaWYgewogdmlld2FibGVfc2Vuc29yX2RhdGEKIGFjdGlvbl9pc19yZWFkCn0KCmFjdGlvbl9pc19yZWFkIGlmICJyZWFkIiBpbiBpbnB1dC5hY3Rpb25zCgp2aWV3YWJsZV9zZW5zb3JfZGF0YSBjb250YWlucyB2aWV3X2RhdGEgaWYgewogc29tZSBzZW5zb3JfZGF0YSBpbiBkYXRhLm5vZGUuYWJhYy5zZW5zb3JfZGF0YQogc2Vuc29yX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KIHNlbnNvcl9kYXRhLnRpbWVzdGFtcCA8IGlucHV0LnRpbWVfcGVyaW9kLnRvCgogdmlld19kYXRhIDo9IHtkYXRhdHlwZTogc2Vuc29yX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"abac","version":"1.0.7","metadata":{"policy-id":"abac","policy-version":"1.0.7"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"967b7747-4de9-4350-a28c-de20927bd02f","timestampMs":1750085985947,"name":"opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 15:01:14 policy-pap | [2025-06-16T14:59:45.973+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 15:01:14 policy-pap | 
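(The abac deployment follows the same pattern. Its properties.policy["abac"] blob decodes to the Rego module below; the accompanying node.abac data document is a sensor_data array of nine readings (ids 0001-0009, locations in Sri Lanka), elided here:

    package abac

    import rego.v1

    default allow := false

    allow if {
     viewable_sensor_data
     action_is_read
    }

    action_is_read if "read" in input.actions

    viewable_sensor_data contains view_data if {
     some sensor_data in data.node.abac.sensor_data
     sensor_data.timestamp >= input.time_period.from
     sensor_data.timestamp < input.time_period.to

     view_data := {datatype: sensor_data[datatype] | datatype in input.datatypes}
    }
)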
{"source":"pap-ef1c2203-595c-4ac7-b11f-f04e7efc9986","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.abac":"ewogICAgInNlbnNvcl9kYXRhIjogWwogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDEiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiU3JpIExhbmthIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjI4IEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjYiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDAyIiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkNvbG9tYm8iLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMzAgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjEyMDAgbW0iLAogICAgICAgICAgICAid2luZHNwZWVkIjogIjYuMCBtL3MiLAogICAgICAgICAgICAiaHVtaWRpdHkiOiAiNDUlIiwKICAgICAgICAgICAgInBhcnRpY2xlX2RlbnNpdHkiOiAiMS41IGcvbCIsCiAgICAgICAgICAgICJ0aW1lc3RhbXAiOiAiMjAyNC0wMi0yNiIKICAgICAgICB9LAogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDMiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiS2FuZHkiLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMjUgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjgwMCBtbSIsCiAgICAgICAgICAgICJ3aW5kc3BlZWQiOiAiNC41IG0vcyIsCiAgICAgICAgICAgICJodW1pZGl0eSI6ICI2MCUiLAogICAgICAgICAgICAicGFydGljbGVfZGVuc2l0eSI6ICIxLjEgZy9sIiwKICAgICAgICAgICAgInRpbWVzdGFtcCI6ICIyMDI0LTAyLTI2IgogICAgICAgIH0sCiAgICAgICAgewogICAgICAgICAgICAiaWQiOiAiMDAwNCIsCiAgICAgICAgICAgICJsb2NhdGlvbiI6ICJHYWxsZSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI3LjIgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjMwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuOCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA1IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkphZmZuYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICItNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiMzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICIzLjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjIwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjAuOSBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA2IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIlRyaW5jb21hbGVlIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjIwIEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjU1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA3IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk51d2FyYSBFbGl5YSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNjAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI0LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjUwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA4IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkFudXJhZGhhcHVyYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyOCBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA5IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk1hdGFyYSIsC
iAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzMiBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiOTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI2LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjY1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfQogICAgXQp9"},"policy":{"abac":"cGFja2FnZSBhYmFjCgppbXBvcnQgcmVnby52MQoKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQoKYWxsb3cgaWYgewogdmlld2FibGVfc2Vuc29yX2RhdGEKIGFjdGlvbl9pc19yZWFkCn0KCmFjdGlvbl9pc19yZWFkIGlmICJyZWFkIiBpbiBpbnB1dC5hY3Rpb25zCgp2aWV3YWJsZV9zZW5zb3JfZGF0YSBjb250YWlucyB2aWV3X2RhdGEgaWYgewogc29tZSBzZW5zb3JfZGF0YSBpbiBkYXRhLm5vZGUuYWJhYy5zZW5zb3JfZGF0YQogc2Vuc29yX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KIHNlbnNvcl9kYXRhLnRpbWVzdGFtcCA8IGlucHV0LnRpbWVfcGVyaW9kLnRvCgogdmlld19kYXRhIDo9IHtkYXRhdHlwZTogc2Vuc29yX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"abac","version":"1.0.7","metadata":{"policy-id":"abac","policy-version":"1.0.7"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"967b7747-4de9-4350-a28c-de20927bd02f","timestampMs":1750085985947,"name":"opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 15:01:14 policy-pap | [2025-06-16T14:59:45.974+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 15:01:14 policy-pap | [2025-06-16T14:59:45.978+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 15:01:14 policy-pap | {"source":"pap-ef1c2203-595c-4ac7-b11f-f04e7efc9986","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.abac":"ewogICAgInNlbnNvcl9kYXRhIjogWwogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDEiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiU3JpIExhbmthIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjI4IEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjYiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDAyIiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkNvbG9tYm8iLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMzAgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjEyMDAgbW0iLAogICAgICAgICAgICAid2luZHNwZWVkIjogIjYuMCBtL3MiLAogICAgICAgICAgICAiaHVtaWRpdHkiOiAiNDUlIiwKICAgICAgICAgICAgInBhcnRpY2xlX2RlbnNpdHkiOiAiMS41IGcvbCIsCiAgICAgICAgICAgICJ0aW1lc3RhbXAiOiAiMjAyNC0wMi0yNiIKICAgICAgICB9LAogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDMiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiS2FuZHkiLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMjUgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjgwMCBtbSIsCiAgICAgICAgICAgICJ3aW5kc3BlZWQiOiAiNC41IG0vcyIsCiAgICAgICAgICAgICJodW1pZGl0eSI6ICI2MCUiLAogICAgICAgICAgICAicGFydGljbGVfZGVuc2l0eSI6ICIxLjEgZy9sIiwKICAgICAgICAgICAgInRpbWVzdGFtcCI6ICIyMDI0LTAyLTI2IgogICAgICAgIH0sCiAgICAgICAgewogICAgICAgICAgICAiaWQiOiAiMDAwNCIsCiAgICAgICAgICAgICJsb2NhdGlvbiI6ICJHYWxsZSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI3LjIgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjMwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuOCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA1IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkphZmZuYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICItNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiMzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI
6ICIzLjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjIwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjAuOSBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA2IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIlRyaW5jb21hbGVlIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjIwIEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjU1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA3IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk51d2FyYSBFbGl5YSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNjAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI0LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjUwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA4IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkFudXJhZGhhcHVyYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyOCBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA5IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk1hdGFyYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzMiBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiOTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI2LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjY1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfQogICAgXQp9"},"policy":{"abac":"cGFja2FnZSBhYmFjCgppbXBvcnQgcmVnby52MQoKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQoKYWxsb3cgaWYgewogdmlld2FibGVfc2Vuc29yX2RhdGEKIGFjdGlvbl9pc19yZWFkCn0KCmFjdGlvbl9pc19yZWFkIGlmICJyZWFkIiBpbiBpbnB1dC5hY3Rpb25zCgp2aWV3YWJsZV9zZW5zb3JfZGF0YSBjb250YWlucyB2aWV3X2RhdGEgaWYgewogc29tZSBzZW5zb3JfZGF0YSBpbiBkYXRhLm5vZGUuYWJhYy5zZW5zb3JfZGF0YQogc2Vuc29yX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KIHNlbnNvcl9kYXRhLnRpbWVzdGFtcCA8IGlucHV0LnRpbWVfcGVyaW9kLnRvCgogdmlld19kYXRhIDo9IHtkYXRhdHlwZTogc2Vuc29yX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"abac","version":"1.0.7","metadata":{"policy-id":"abac","policy-version":"1.0.7"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"967b7747-4de9-4350-a28c-de20927bd02f","timestampMs":1750085985947,"name":"opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 15:01:14 policy-pap | [2025-06-16T14:59:45.979+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 15:01:14 policy-pap | [2025-06-16T14:59:46.015+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 15:01:14 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"967b7747-4de9-4350-a28c-de20927bd02f","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"abac\": 
\"1.0.7\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"abac","version":"1.0.7"}],"name":"opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c","requestId":"34e1f600-4598-41cc-b885-6ca5c41e05ef","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750085986003","deploymentInstanceInfo":""} 15:01:14 policy-pap | [2025-06-16T14:59:46.015+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 15:01:14 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"967b7747-4de9-4350-a28c-de20927bd02f","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"abac\": \"1.0.7\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"abac","version":"1.0.7"}],"name":"opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c","requestId":"34e1f600-4598-41cc-b885-6ca5c41e05ef","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750085986003","deploymentInstanceInfo":""} 15:01:14 policy-pap | [2025-06-16T14:59:46.015+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpUpdate stopping 15:01:14 policy-pap | [2025-06-16T14:59:46.015+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpUpdate stopping enqueue 15:01:14 policy-pap | [2025-06-16T14:59:46.015+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpUpdate stopping timer 15:01:14 policy-pap | [2025-06-16T14:59:46.015+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=967b7747-4de9-4350-a28c-de20927bd02f, expireMs=1750086015962] 15:01:14 policy-pap | [2025-06-16T14:59:46.015+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpUpdate stopping listener 15:01:14 policy-pap | [2025-06-16T14:59:46.015+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 967b7747-4de9-4350-a28c-de20927bd02f 15:01:14 policy-pap | [2025-06-16T14:59:46.015+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpUpdate stopped 15:01:14 policy-pap | [2025-06-16T14:59:46.025+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpUpdate successful 15:01:14 policy-pap | [2025-06-16T14:59:46.025+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c has no more requests 15:01:14 policy-pap | [2025-06-16T14:59:46.025+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification] 15:01:14 policy-pap | {"deployed-policies":[{"policy-type":"onap.policies.native.opa","policy-type-version":"1.0.0","policy-id":"abac","policy-version":"1.0.7","success-count":1,"failure-count":0,"incomplete-count":0}],"undeployed-policies":[]} 15:01:14 policy-pap | [2025-06-16T14:59:52.499+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 15:01:14 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp heartbeat","response":null,"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"abac","version":"1.0.7"}],"name":"opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c","requestId":"78f5e458-6c6f-4aff-a568-401e9f20ff47","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750085992484","deploymentInstanceInfo":""} 15:01:14 policy-pap | 
[2025-06-16T14:59:52.500+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 15:01:14 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp heartbeat","response":null,"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"abac","version":"1.0.7"}],"name":"opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c","requestId":"78f5e458-6c6f-4aff-a568-401e9f20ff47","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750085992484","deploymentInstanceInfo":""} 15:01:14 policy-pap | [2025-06-16T14:59:52.501+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus 15:01:14 policy-pap | [2025-06-16T14:59:56.573+00:00|INFO|PdpModifyRequestMap|pool-3-thread-1] check for PDP records older than 360000ms 15:01:14 policy-pap | [2025-06-16T15:00:10.652+00:00|INFO|SessionData|http-nio-6969-exec-5] cache group opaGroup 15:01:14 policy-pap | [2025-06-16T15:00:10.652+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-5] remove policy abac 1.0.7 from subgroup opaGroup opa count=1 15:01:14 policy-pap | [2025-06-16T15:00:10.652+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] Registering an undeploy for policy abac 1.0.7 15:01:14 policy-pap | [2025-06-16T15:00:10.653+00:00|INFO|SessionData|http-nio-6969-exec-5] add update opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c opaGroup opa policies=0 15:01:14 policy-pap | [2025-06-16T15:00:10.653+00:00|INFO|SessionData|http-nio-6969-exec-5] update cached group opaGroup 15:01:14 policy-pap | [2025-06-16T15:00:10.653+00:00|INFO|SessionData|http-nio-6969-exec-5] updating DB group opaGroup 15:01:14 policy-pap | [2025-06-16T15:00:10.661+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=opaGroup, pdpType=opa, policy=abac 1.0.7, action=UNDEPLOYMENT, timestamp=2025-06-16T15:00:10Z, user=policyadmin)] 15:01:14 policy-pap | [2025-06-16T15:00:10.668+00:00|INFO|ServiceManager|http-nio-6969-exec-5] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpUpdate starting 15:01:14 policy-pap | [2025-06-16T15:00:10.668+00:00|INFO|ServiceManager|http-nio-6969-exec-5] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpUpdate starting listener 15:01:14 policy-pap | [2025-06-16T15:00:10.668+00:00|INFO|ServiceManager|http-nio-6969-exec-5] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpUpdate starting timer 15:01:14 policy-pap | [2025-06-16T15:00:10.668+00:00|INFO|TimerManager|http-nio-6969-exec-5] update timer registered Timer [name=8f524fdb-2d83-46a9-9b0a-c29f7875f05b, expireMs=1750086040668] 15:01:14 policy-pap | [2025-06-16T15:00:10.668+00:00|INFO|ServiceManager|http-nio-6969-exec-5] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpUpdate starting enqueue 15:01:14 policy-pap | [2025-06-16T15:00:10.668+00:00|INFO|ServiceManager|http-nio-6969-exec-5] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpUpdate started 15:01:14 policy-pap | [2025-06-16T15:00:10.669+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 15:01:14 policy-pap | {"source":"pap-ef1c2203-595c-4ac7-b11f-f04e7efc9986","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"abac","version":"1.0.7"}],"messageName":"PDP_UPDATE","requestId":"8f524fdb-2d83-46a9-9b0a-c29f7875f05b","timestampMs":1750086010653,"name":"opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 15:01:14 policy-pap | [2025-06-16T15:00:10.678+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 15:01:14 policy-pap 
| {"source":"pap-ef1c2203-595c-4ac7-b11f-f04e7efc9986","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"abac","version":"1.0.7"}],"messageName":"PDP_UPDATE","requestId":"8f524fdb-2d83-46a9-9b0a-c29f7875f05b","timestampMs":1750086010653,"name":"opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 15:01:14 policy-pap | [2025-06-16T15:00:10.678+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 15:01:14 policy-pap | [2025-06-16T15:00:10.680+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 15:01:14 policy-pap | {"source":"pap-ef1c2203-595c-4ac7-b11f-f04e7efc9986","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"abac","version":"1.0.7"}],"messageName":"PDP_UPDATE","requestId":"8f524fdb-2d83-46a9-9b0a-c29f7875f05b","timestampMs":1750086010653,"name":"opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 15:01:14 policy-pap | [2025-06-16T15:00:10.680+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 15:01:14 policy-pap | [2025-06-16T15:00:10.691+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 15:01:14 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"8f524fdb-2d83-46a9-9b0a-c29f7875f05b","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"abac\": \"1.0.7\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c","requestId":"c240ed42-7228-4262-8309-afbd26eea197","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750086010679","deploymentInstanceInfo":""} 15:01:14 policy-pap | [2025-06-16T15:00:10.692+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 8f524fdb-2d83-46a9-9b0a-c29f7875f05b 15:01:14 policy-pap | [2025-06-16T15:00:10.692+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 15:01:14 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"8f524fdb-2d83-46a9-9b0a-c29f7875f05b","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"abac\": \"1.0.7\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c","requestId":"c240ed42-7228-4262-8309-afbd26eea197","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750086010679","deploymentInstanceInfo":""} 15:01:14 policy-pap | [2025-06-16T15:00:10.693+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpUpdate stopping 15:01:14 policy-pap | [2025-06-16T15:00:10.693+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpUpdate stopping enqueue 15:01:14 policy-pap | [2025-06-16T15:00:10.693+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpUpdate stopping timer 15:01:14 policy-pap | [2025-06-16T15:00:10.693+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=8f524fdb-2d83-46a9-9b0a-c29f7875f05b, expireMs=1750086040668] 15:01:14 policy-pap | [2025-06-16T15:00:10.693+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] 
opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpUpdate stopping listener 15:01:14 policy-pap | [2025-06-16T15:00:10.693+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpUpdate stopped 15:01:14 policy-pap | [2025-06-16T15:00:10.702+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification] 15:01:14 policy-pap | {"deployed-policies":[],"undeployed-policies":[{"policy-type":"onap.policies.native.opa","policy-type-version":"1.0.0","policy-id":"abac","policy-version":"1.0.7","success-count":1,"failure-count":0,"incomplete-count":0}]} 15:01:14 policy-pap | [2025-06-16T15:00:10.703+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c PdpUpdate successful 15:01:14 policy-pap | [2025-06-16T15:00:10.703+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-383e04e6-96ca-4aa9-bdcb-37f06cb6277c has no more requests 15:01:14 policy-pap | [2025-06-16T15:00:11.074+00:00|INFO|SessionData|http-nio-6969-exec-7] cache group opaGroup 15:01:14 policy-pap | [2025-06-16T15:00:11.074+00:00|WARN|PdpGroupDeleteProvider|http-nio-6969-exec-7] failed to undeploy policy: abac null 15:01:14 policy-pap | [2025-06-16T15:00:11.074+00:00|WARN|PdpGroupDeleteControllerV1|http-nio-6969-exec-7] undeploy policy failed 15:01:14 policy-pap | org.onap.policy.models.base.PfModelException: policy does not appear in any PDP group: abac null 15:01:14 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteProvider.undeployPolicy(PdpGroupDeleteProvider.java:108) 15:01:14 policy-pap | at org.onap.policy.pap.main.rest.ProviderBase.process(ProviderBase.java:161) 15:01:14 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteProvider.undeploy(PdpGroupDeleteProvider.java:92) 15:01:14 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 15:01:14 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) 15:01:14 policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 15:01:14 policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569) 15:01:14 policy-pap | at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:359) 15:01:14 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:196) 15:01:14 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163) 15:01:14 policy-pap | at org.springframework.aop.aspectj.AspectJAfterThrowingAdvice.invoke(AspectJAfterThrowingAdvice.java:64) 15:01:14 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) 15:01:14 policy-pap | at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:97) 15:01:14 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) 15:01:14 policy-pap | at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:728) 15:01:14 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteProvider$$SpringCGLIB$$0.undeploy() 15:01:14 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.lambda$deletePolicy$1(PdpGroupDeleteControllerV1.java:107) 15:01:14 policy-pap | at 
org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.doUndeployOperation(PdpGroupDeleteControllerV1.java:160) 15:01:14 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.deletePolicy(PdpGroupDeleteControllerV1.java:106) 15:01:14 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 15:01:14 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) 15:01:14 policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 15:01:14 policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569) 15:01:14 policy-pap | at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:359) 15:01:14 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:196) 15:01:14 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163) 15:01:14 policy-pap | at org.springframework.aop.aspectj.AspectJAfterThrowingAdvice.invoke(AspectJAfterThrowingAdvice.java:64) 15:01:14 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) 15:01:14 policy-pap | at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:97) 15:01:14 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) 15:01:14 policy-pap | at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:728) 15:01:14 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1$$SpringCGLIB$$0.deletePolicy() 15:01:14 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 15:01:14 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) 15:01:14 policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 15:01:14 policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569) 15:01:14 policy-pap | at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:258) 15:01:14 policy-pap | at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:191) 15:01:14 policy-pap | at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:118) 15:01:14 policy-pap | at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:986) 15:01:14 policy-pap | at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:891) 15:01:14 policy-pap | at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87) 15:01:14 policy-pap | at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1089) 15:01:14 policy-pap | at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:979) 15:01:14 policy-pap | at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1014) 15:01:14 policy-pap | at 
org.springframework.web.servlet.FrameworkServlet.doDelete(FrameworkServlet.java:936) 15:01:14 policy-pap | at jakarta.servlet.http.HttpServlet.service(HttpServlet.java:659) 15:01:14 policy-pap | at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:885) 15:01:14 policy-pap | at jakarta.servlet.http.HttpServlet.service(HttpServlet.java:723) 15:01:14 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:195) 15:01:14 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 15:01:14 policy-pap | at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:51) 15:01:14 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) 15:01:14 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 15:01:14 policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:108) 15:01:14 policy-pap | at org.springframework.security.web.FilterChainProxy.lambda$doFilterInternal$3(FilterChainProxy.java:231) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$FilterObservation$SimpleFilterObservation.lambda$wrap$1(ObservationFilterChainDecorator.java:479) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$AroundFilterObservation$SimpleAroundFilterObservation.lambda$wrap$1(ObservationFilterChainDecorator.java:340) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator.lambda$wrapSecured$0(ObservationFilterChainDecorator.java:82) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:128) 15:01:14 policy-pap | at org.springframework.security.web.access.intercept.AuthorizationFilter.doFilter(AuthorizationFilter.java:101) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 15:01:14 policy-pap | at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:126) 15:01:14 policy-pap | at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:120) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 15:01:14 policy-pap | at org.springframework.security.web.authentication.AnonymousAuthenticationFilter.doFilter(AnonymousAuthenticationFilter.java:100) 15:01:14 policy-pap | at 
org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 15:01:14 policy-pap | at org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter.doFilter(SecurityContextHolderAwareRequestFilter.java:179) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 15:01:14 policy-pap | at org.springframework.security.web.savedrequest.RequestCacheAwareFilter.doFilter(RequestCacheAwareFilter.java:63) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 15:01:14 policy-pap | at org.springframework.security.web.authentication.www.BasicAuthenticationFilter.doFilterInternal(BasicAuthenticationFilter.java:213) 15:01:14 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 15:01:14 policy-pap | at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:107) 15:01:14 policy-pap | at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:93) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 15:01:14 policy-pap | at org.springframework.security.web.header.HeaderWriterFilter.doHeadersAfter(HeaderWriterFilter.java:90) 15:01:14 policy-pap | at org.springframework.security.web.header.HeaderWriterFilter.doFilterInternal(HeaderWriterFilter.java:75) 15:01:14 policy-pap | at 
org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 15:01:14 policy-pap | at org.springframework.security.web.context.SecurityContextHolderFilter.doFilter(SecurityContextHolderFilter.java:82) 15:01:14 policy-pap | at org.springframework.security.web.context.SecurityContextHolderFilter.doFilter(SecurityContextHolderFilter.java:69) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 15:01:14 policy-pap | at org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter.doFilterInternal(WebAsyncManagerIntegrationFilter.java:62) 15:01:14 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 15:01:14 policy-pap | at org.springframework.security.web.session.DisableEncodeUrlFilter.doFilterInternal(DisableEncodeUrlFilter.java:42) 15:01:14 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$AroundFilterObservation$SimpleAroundFilterObservation.lambda$wrap$0(ObservationFilterChainDecorator.java:323) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:224) 15:01:14 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 15:01:14 policy-pap | at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:233) 15:01:14 policy-pap | at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:191) 15:01:14 policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:113) 15:01:14 policy-pap | at org.springframework.web.servlet.handler.HandlerMappingIntrospector.lambda$createCacheFilter$3(HandlerMappingIntrospector.java:243) 
15:01:14 policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:113) 15:01:14 policy-pap | at org.springframework.web.filter.CompositeFilter.doFilter(CompositeFilter.java:74) 15:01:14 policy-pap | at org.springframework.security.config.annotation.web.configuration.WebMvcSecurityConfiguration$CompositeFilterChainProxy.doFilter(WebMvcSecurityConfiguration.java:238) 15:01:14 policy-pap | at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:362) 15:01:14 policy-pap | at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:278) 15:01:14 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) 15:01:14 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 15:01:14 policy-pap | at org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:100) 15:01:14 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 15:01:14 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) 15:01:14 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 15:01:14 policy-pap | at org.springframework.web.filter.FormContentFilter.doFilterInternal(FormContentFilter.java:93) 15:01:14 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 15:01:14 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) 15:01:14 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 15:01:14 policy-pap | at org.springframework.web.filter.ServerHttpObservationFilter.doFilterInternal(ServerHttpObservationFilter.java:114) 15:01:14 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 15:01:14 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) 15:01:14 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 15:01:14 policy-pap | at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:201) 15:01:14 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 15:01:14 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) 15:01:14 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 15:01:14 policy-pap | at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:167) 15:01:14 policy-pap | at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:90) 15:01:14 policy-pap | at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:483) 15:01:14 policy-pap | at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:116) 15:01:14 policy-pap | at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:93) 15:01:14 policy-pap | at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:74) 15:01:14 policy-pap | at 
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:344) 15:01:14 policy-pap | at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:398) 15:01:14 policy-pap | at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:63) 15:01:14 policy-pap | at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:903) 15:01:14 policy-pap | at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1740) 15:01:14 policy-pap | at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:52) 15:01:14 policy-pap | at org.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1189) 15:01:14 policy-pap | at org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:658) 15:01:14 policy-pap | at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:63) 15:01:14 policy-pap | at java.base/java.lang.Thread.run(Thread.java:840) 15:01:14 policy-pap | [2025-06-16T15:00:14.754+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=c7ac162c-8487-4b7a-8206-a9bed30842a0, expireMs=1750086014754] 15:01:14 postgres | The files belonging to this database system will be owned by user "postgres". 15:01:14 postgres | This user must also own the server process. 15:01:14 postgres | 15:01:14 postgres | The database cluster will be initialized with locale "en_US.utf8". 15:01:14 postgres | The default database encoding has accordingly been set to "UTF8". 15:01:14 postgres | The default text search configuration will be set to "english". 15:01:14 postgres | 15:01:14 postgres | Data page checksums are disabled. 15:01:14 postgres | 15:01:14 postgres | fixing permissions on existing directory /var/lib/postgresql/data ... ok 15:01:14 postgres | creating subdirectories ... ok 15:01:14 postgres | selecting dynamic shared memory implementation ... posix 15:01:14 postgres | selecting default max_connections ... 100 15:01:14 postgres | selecting default shared_buffers ... 128MB 15:01:14 postgres | selecting default time zone ... Etc/UTC 15:01:14 postgres | creating configuration files ... ok 15:01:14 postgres | running bootstrap script ... ok 15:01:14 postgres | performing post-bootstrap initialization ... ok 15:01:14 postgres | initdb: warning: enabling "trust" authentication for local connections 15:01:14 postgres | initdb: hint: You can change this by editing pg_hba.conf or using the option -A, or --auth-local and --auth-host, the next time you run initdb. 15:01:14 postgres | syncing data to disk ... ok 15:01:14 postgres | 15:01:14 postgres | 15:01:14 postgres | Success. 
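A note on the WARN/PfModelException recorded in the policy-pap output above: the first undeploy request removed abac 1.0.7 from opaGroup and returned SUCCESS, so when the suite issues the same DELETE a second time PAP can no longer find the policy in any PDP group and correctly rejects it. A minimal sketch of that call, assuming the standard PAP undeploy endpoint visible in the stack trace (PdpGroupDeleteControllerV1.deletePolicy) and placeholder credentials; the host, port, scheme and PAP_USER/PAP_PASS variables are assumptions, not values taken from this job:

#!/bin/bash
# Undeploy a policy via PAP; running this twice reproduces the
# SUCCESS-then-"policy does not appear in any PDP group" pattern above.
PAP_HOST=${PAP_HOST:-localhost}   # hypothetical address of the policy-pap container
PAP_PORT=${PAP_PORT:-6969}        # PAP's REST port (http-nio-6969 in the log)
curl -sk -u "${PAP_USER}:${PAP_PASS}" -X DELETE \
  "https://${PAP_HOST}:${PAP_PORT}/policy/pap/v1/pdps/policies/abac/versions/1.0.7"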
You can now start the database server using: 15:01:14 postgres | 15:01:14 postgres | pg_ctl -D /var/lib/postgresql/data -l logfile start 15:01:14 postgres | 15:01:14 postgres | waiting for server to start....2025-06-16 14:55:04.835 UTC [49] LOG: starting PostgreSQL 16.4 (Debian 16.4-1.pgdg120+2) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit 15:01:14 postgres | 2025-06-16 14:55:04.838 UTC [49] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432" 15:01:14 postgres | 2025-06-16 14:55:04.844 UTC [52] LOG: database system was shut down at 2025-06-16 14:55:04 UTC 15:01:14 postgres | 2025-06-16 14:55:04.849 UTC [49] LOG: database system is ready to accept connections 15:01:14 postgres | done 15:01:14 postgres | server started 15:01:14 postgres | 15:01:14 postgres | /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/db-pg.conf 15:01:14 postgres | 15:01:14 postgres | /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/db-pg.sh 15:01:14 postgres | #!/bin/bash -xv 15:01:14 postgres | # Copyright (C) 2022, 2024 Nordix Foundation. All rights reserved 15:01:14 postgres | # 15:01:14 postgres | # Licensed under the Apache License, Version 2.0 (the "License"); 15:01:14 postgres | # you may not use this file except in compliance with the License. 15:01:14 postgres | # You may obtain a copy of the License at 15:01:14 postgres | # 15:01:14 postgres | # http://www.apache.org/licenses/LICENSE-2.0 15:01:14 postgres | # 15:01:14 postgres | # Unless required by applicable law or agreed to in writing, software 15:01:14 postgres | # distributed under the License is distributed on an "AS IS" BASIS, 15:01:14 postgres | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 15:01:14 postgres | # See the License for the specific language governing permissions and 15:01:14 postgres | # limitations under the License. 
15:01:14 postgres | 15:01:14 postgres | psql -U postgres -d postgres --command "CREATE USER ${PGSQL_USER} WITH PASSWORD '${PGSQL_PASSWORD}';" 15:01:14 postgres | + psql -U postgres -d postgres --command 'CREATE USER policy_user WITH PASSWORD '\''policy_user'\'';' 15:01:14 postgres | CREATE ROLE 15:01:14 postgres | 15:01:14 postgres | for db in migration pooling policyadmin policyclamp operationshistory clampacm 15:01:14 postgres | do 15:01:14 postgres | psql -U postgres -d postgres --command "CREATE DATABASE ${db};" 15:01:14 postgres | psql -U postgres -d postgres --command "ALTER DATABASE ${db} OWNER TO ${PGSQL_USER} ;" 15:01:14 postgres | psql -U postgres -d postgres --command "GRANT ALL PRIVILEGES ON DATABASE ${db} TO ${PGSQL_USER} ;" 15:01:14 postgres | done 15:01:14 postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm 15:01:14 postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE migration;' 15:01:14 postgres | CREATE DATABASE 15:01:14 postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE migration OWNER TO policy_user ;' 15:01:14 postgres | ALTER DATABASE 15:01:14 postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE migration TO policy_user ;' 15:01:14 postgres | GRANT 15:01:14 postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm 15:01:14 postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE pooling;' 15:01:14 postgres | CREATE DATABASE 15:01:14 postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE pooling OWNER TO policy_user ;' 15:01:14 postgres | ALTER DATABASE 15:01:14 postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE pooling TO policy_user ;' 15:01:14 postgres | GRANT 15:01:14 postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm 15:01:14 postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE policyadmin;' 15:01:14 postgres | CREATE DATABASE 15:01:14 postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE policyadmin OWNER TO policy_user ;' 15:01:14 postgres | ALTER DATABASE 15:01:14 postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE policyadmin TO policy_user ;' 15:01:14 postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm 15:01:14 postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE policyclamp;' 15:01:14 postgres | GRANT 15:01:14 postgres | CREATE DATABASE 15:01:14 postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE policyclamp OWNER TO policy_user ;' 15:01:14 postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE policyclamp TO policy_user ;' 15:01:14 postgres | ALTER DATABASE 15:01:14 postgres | GRANT 15:01:14 postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm 15:01:14 postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE operationshistory;' 15:01:14 postgres | CREATE DATABASE 15:01:14 postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE operationshistory OWNER TO policy_user ;' 15:01:14 postgres | ALTER DATABASE 15:01:14 postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE operationshistory TO policy_user ;' 15:01:14 postgres | GRANT 15:01:14 postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm 15:01:14 postgres | + psql -U postgres 
-d postgres --command 'CREATE DATABASE clampacm;' 15:01:14 postgres | CREATE DATABASE 15:01:14 postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE clampacm OWNER TO policy_user ;' 15:01:14 postgres | ALTER DATABASE 15:01:14 postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE clampacm TO policy_user ;' 15:01:14 postgres | GRANT 15:01:14 postgres | 15:01:14 postgres | 2025-06-16 14:55:06.321 UTC [49] LOG: received fast shutdown request 15:01:14 postgres | waiting for server to shut down....2025-06-16 14:55:06.323 UTC [49] LOG: aborting any active transactions 15:01:14 postgres | 2025-06-16 14:55:06.325 UTC [49] LOG: background worker "logical replication launcher" (PID 55) exited with exit code 1 15:01:14 postgres | 2025-06-16 14:55:06.329 UTC [50] LOG: shutting down 15:01:14 postgres | 2025-06-16 14:55:06.330 UTC [50] LOG: checkpoint starting: shutdown immediate 15:01:14 postgres | 2025-06-16 14:55:06.805 UTC [50] LOG: checkpoint complete: wrote 5511 buffers (33.6%); 0 WAL file(s) added, 0 removed, 1 recycled; write=0.332 s, sync=0.132 s, total=0.477 s; sync files=1788, longest=0.004 s, average=0.001 s; distance=25535 kB, estimate=25535 kB; lsn=0/2DDA218, redo lsn=0/2DDA218 15:01:14 postgres | 2025-06-16 14:55:06.818 UTC [49] LOG: database system is shut down 15:01:14 postgres | done 15:01:14 postgres | server stopped 15:01:14 postgres | 15:01:14 postgres | PostgreSQL init process complete; ready for start up. 15:01:14 postgres | 15:01:14 postgres | 2025-06-16 14:55:06.852 UTC [1] LOG: starting PostgreSQL 16.4 (Debian 16.4-1.pgdg120+2) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit 15:01:14 postgres | 2025-06-16 14:55:06.852 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432 15:01:14 postgres | 2025-06-16 14:55:06.852 UTC [1] LOG: listening on IPv6 address "::", port 5432 15:01:14 postgres | 2025-06-16 14:55:06.859 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432" 15:01:14 postgres | 2025-06-16 14:55:06.866 UTC [102] LOG: database system was shut down at 2025-06-16 14:55:06 UTC 15:01:14 postgres | 2025-06-16 14:55:06.871 UTC [1] LOG: database system is ready to accept connections 15:01:14 postgres | 2025-06-16 15:00:06.943 UTC [100] LOG: checkpoint starting: time 15:01:14 postgres | 2025-06-16 15:01:12.144 UTC [100] LOG: checkpoint complete: wrote 655 buffers (4.0%); 0 WAL file(s) added, 0 removed, 1 recycled; write=65.169 s, sync=0.022 s, total=65.201 s; sync files=519, longest=0.002 s, average=0.001 s; distance=3563 kB, estimate=3563 kB; lsn=0/31574E8, redo lsn=0/3155000 15:01:15 prometheus | time=2025-06-16T14:55:00.635Z level=INFO source=main.go:674 msg="No time or size retention was set so using the default time retention" duration=15d 15:01:15 prometheus | time=2025-06-16T14:55:00.635Z level=INFO source=main.go:725 msg="Starting Prometheus Server" mode=server version="(version=3.4.1, branch=HEAD, revision=aea6503d9bbaad6c5faff3ecf6f1025213356c92)" 15:01:15 prometheus | time=2025-06-16T14:55:00.635Z level=INFO source=main.go:730 msg="operational information" build_context="(go=go1.24.3, platform=linux/amd64, user=root@16f976c24db1, date=20250531-10:44:38, tags=netgo,builtinassets,stringlabels)" host_details="(Linux 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 prometheus (none))" fd_limits="(soft=1048576, hard=1048576)" vm_limits="(soft=unlimited, hard=unlimited)" 15:01:15 prometheus | time=2025-06-16T14:55:00.638Z level=INFO source=main.go:806 
msg="Leaving GOMAXPROCS=8: CPU quota undefined" component=automaxprocs 15:01:15 prometheus | time=2025-06-16T14:55:00.640Z level=INFO source=web.go:656 msg="Start listening for connections" component=web address=0.0.0.0:9090 15:01:15 prometheus | time=2025-06-16T14:55:00.641Z level=INFO source=main.go:1266 msg="Starting TSDB ..." 15:01:15 prometheus | time=2025-06-16T14:55:00.643Z level=INFO source=tls_config.go:347 msg="Listening on" component=web address=[::]:9090 15:01:15 prometheus | time=2025-06-16T14:55:00.643Z level=INFO source=tls_config.go:350 msg="TLS is disabled." component=web http2=false address=[::]:9090 15:01:15 prometheus | time=2025-06-16T14:55:00.645Z level=INFO source=head.go:657 msg="Replaying on-disk memory mappable chunks if any" component=tsdb 15:01:15 prometheus | time=2025-06-16T14:55:00.645Z level=INFO source=head.go:744 msg="On-disk memory mappable chunks replay completed" component=tsdb duration=1.96µs 15:01:15 prometheus | time=2025-06-16T14:55:00.645Z level=INFO source=head.go:752 msg="Replaying WAL, this may take a while" component=tsdb 15:01:15 prometheus | time=2025-06-16T14:55:00.646Z level=INFO source=head.go:825 msg="WAL segment loaded" component=tsdb segment=0 maxSegment=0 duration=473.754µs 15:01:15 prometheus | time=2025-06-16T14:55:00.646Z level=INFO source=head.go:862 msg="WAL replay completed" component=tsdb checkpoint_replay_duration=42.6µs wal_replay_duration=501.585µs wbl_replay_duration=190ns chunk_snapshot_load_duration=0s mmap_chunk_replay_duration=1.96µs total_replay_duration=622.276µs 15:01:15 prometheus | time=2025-06-16T14:55:00.650Z level=INFO source=main.go:1287 msg="filesystem information" fs_type=EXT4_SUPER_MAGIC 15:01:15 prometheus | time=2025-06-16T14:55:00.650Z level=INFO source=main.go:1290 msg="TSDB started" 15:01:15 prometheus | time=2025-06-16T14:55:00.650Z level=INFO source=main.go:1475 msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml 15:01:15 prometheus | time=2025-06-16T14:55:00.652Z level=INFO source=main.go:1514 msg="updated GOGC" old=100 new=75 15:01:15 prometheus | time=2025-06-16T14:55:00.652Z level=INFO source=main.go:1524 msg="Completed loading of configuration file" db_storage=1.71µs remote_storage=2.14µs web_handler=1.37µs query_engine=1.301µs scrape=445.994µs scrape_sd=326.573µs notify=265.272µs notify_sd=138.722µs rules=1.92µs tracing=8.58µs filename=/etc/prometheus/prometheus.yml totalDuration=2.11449ms 15:01:15 prometheus | time=2025-06-16T14:55:00.652Z level=INFO source=main.go:1251 msg="Server is ready to receive web requests." 15:01:15 prometheus | time=2025-06-16T14:55:00.652Z level=INFO source=manager.go:175 msg="Starting rule manager..." component="rule manager" 15:01:15 zookeeper | ===> User 15:01:15 zookeeper | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) 15:01:15 zookeeper | ===> Configuring ... 15:01:15 zookeeper | ===> Running preflight checks ... 15:01:15 zookeeper | ===> Check if /var/lib/zookeeper/data is writable ... 15:01:15 zookeeper | ===> Check if /var/lib/zookeeper/log is writable ... 15:01:15 zookeeper | ===> Launching ... 15:01:15 zookeeper | ===> Launching zookeeper ... 
15:01:15 zookeeper | [2025-06-16 14:55:04,692] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 15:01:15 zookeeper | [2025-06-16 14:55:04,694] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 15:01:15 zookeeper | [2025-06-16 14:55:04,694] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 15:01:15 zookeeper | [2025-06-16 14:55:04,694] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 15:01:15 zookeeper | [2025-06-16 14:55:04,694] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 15:01:15 zookeeper | [2025-06-16 14:55:04,695] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager) 15:01:15 zookeeper | [2025-06-16 14:55:04,696] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager) 15:01:15 zookeeper | [2025-06-16 14:55:04,696] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager) 15:01:15 zookeeper | [2025-06-16 14:55:04,696] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain) 15:01:15 zookeeper | [2025-06-16 14:55:04,697] INFO Log4j 1.2 jmx support not found; jmx disabled. (org.apache.zookeeper.jmx.ManagedUtil) 15:01:15 zookeeper | [2025-06-16 14:55:04,697] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 15:01:15 zookeeper | [2025-06-16 14:55:04,697] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 15:01:15 zookeeper | [2025-06-16 14:55:04,697] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 15:01:15 zookeeper | [2025-06-16 14:55:04,697] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 15:01:15 zookeeper | [2025-06-16 14:55:04,697] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 15:01:15 zookeeper | [2025-06-16 14:55:04,697] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain) 15:01:15 zookeeper | [2025-06-16 14:55:04,708] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@3bbc39f8 (org.apache.zookeeper.server.ServerMetrics) 15:01:15 zookeeper | [2025-06-16 14:55:04,710] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) 15:01:15 zookeeper | [2025-06-16 14:55:04,710] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) 15:01:15 zookeeper | [2025-06-16 14:55:04,712] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog) 15:01:15 zookeeper | [2025-06-16 14:55:04,719] INFO [ZooKeeper ASCII-art startup banner] (org.apache.zookeeper.server.ZooKeeperServer) 15:01:15 zookeeper | [2025-06-16 14:55:04,721] INFO Server environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.server.ZooKeeperServer) 15:01:15 zookeeper | [2025-06-16 14:55:04,721] INFO Server environment:host.name=zookeeper (org.apache.zookeeper.server.ZooKeeperServer) 15:01:15 zookeeper | [2025-06-16 14:55:04,721] INFO Server environment:java.version=17.0.14 (org.apache.zookeeper.server.ZooKeeperServer) 15:01:15 zookeeper | [2025-06-16 14:55:04,721] INFO Server environment:java.vendor=Eclipse Adoptium (org.apache.zookeeper.server.ZooKeeperServer) 15:01:15 zookeeper | [2025-06-16 14:55:04,721] INFO Server environment:java.home=/usr/lib/jvm/temurin-17-jre (org.apache.zookeeper.server.ZooKeeperServer) 15:01:15 zookeeper | [2025-06-16 14:55:04,721] INFO Server environment:java.class.path=/usr/bin/../share/java/kafka/kafka-streams-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-transaction-coordinator-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/connect-transforms-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.118.Final.jar:/usr/bin/../share/java/kafka/kafka-clients-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/connect-mirror-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.16.2.jar:/usr/bin/../share/java/kafka/scala-library-2.13.15.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.118.Final.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.16.2.jar:/usr/bin/../share/java/kafka/connect-runtime-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-afterburner-2.16.2.jar:/usr/bin/../share/java/kafka
/kafka-streams-test-utils-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/protobuf-java-3.25.5.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.16.2.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/maven-artifact-3.9.6.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/trogdor-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-server-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-shell-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.15.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-common-4.1.118.Final.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/commons-lang3-3.12.0.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.118.Final.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.6-4.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.118.Final.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.118.Final.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-tools-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.16.2.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.118.Final.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.16.2.jar:/usr/bin/../share/java/kafka/jackson-databind-2.16.2.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/opentelemetry-proto-1.0.0-alpha.jar:/usr/bin/..
/share/java/kafka/jetty-security-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/connect-json-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.16.2.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/kafka-raft-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/plexus-utils-3.5.1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/connect-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/commons-io-2.14.0.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.16.2.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/jackson-core-2.16.2.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jline-3.25.1.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.118.Final.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.118.Final.jar:/usr/bin/../share/java/kafka/kafka-storage-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.57.v20241219.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.server.ZooKeeperServer) 15:01:15 zookeeper | [2025-06-16 14:55:04,721] INFO Server environment:java.library.path=/usr/local/lib64:/usr/local/lib::/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer) 15:01:15 zookeeper | [2025-06-16 14:55:04,721] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer) 15:01:15 zookeeper | [2025-06-16 14:55:04,721] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer) 15:01:15 zookeeper | [2025-06-16 14:55:04,721] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer) 15:01:15 zookeeper | [2025-06-16 14:55:04,721] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer) 15:01:15 zookeeper | [2025-06-16 14:55:04,721] INFO Server environment:os.version=4.15.0-192-generic (org.apache.zookeeper.server.ZooKeeperServer) 15:01:15 zookeeper | [2025-06-16 14:55:04,721] INFO Server environment:user.name=appuser (org.apache.zookeeper.server.ZooKeeperServer) 15:01:15 zookeeper | [2025-06-16 14:55:04,721] INFO Server environment:user.home=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) 15:01:15 zookeeper | [2025-06-16 14:55:04,721] INFO Server environment:user.dir=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) 15:01:15 zookeeper | [2025-06-16 14:55:04,721] INFO Server environment:os.memory.free=495MB (org.apache.zookeeper.server.ZooKeeperServer) 15:01:15 zookeeper | [2025-06-16 14:55:04,721] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer) 15:01:15 zookeeper | [2025-06-16 14:55:04,721] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer) 15:01:15 zookeeper | [2025-06-16 14:55:04,721] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer) 15:01:15 zookeeper | [2025-06-16 14:55:04,722] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) 15:01:15 
zookeeper | [2025-06-16 14:55:04,722] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) 15:01:15 zookeeper | [2025-06-16 14:55:04,722] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) 15:01:15 zookeeper | [2025-06-16 14:55:04,722] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) 15:01:15 zookeeper | [2025-06-16 14:55:04,722] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer) 15:01:15 zookeeper | [2025-06-16 14:55:04,722] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer) 15:01:15 zookeeper | [2025-06-16 14:55:04,723] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle) 15:01:15 zookeeper | [2025-06-16 14:55:04,723] INFO minSessionTimeout set to 6000 ms (org.apache.zookeeper.server.ZooKeeperServer) 15:01:15 zookeeper | [2025-06-16 14:55:04,723] INFO maxSessionTimeout set to 60000 ms (org.apache.zookeeper.server.ZooKeeperServer) 15:01:15 zookeeper | [2025-06-16 14:55:04,724] INFO getData response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache) 15:01:15 zookeeper | [2025-06-16 14:55:04,725] INFO getChildren response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache) 15:01:15 zookeeper | [2025-06-16 14:55:04,725] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 15:01:15 zookeeper | [2025-06-16 14:55:04,725] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 15:01:15 zookeeper | [2025-06-16 14:55:04,725] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 15:01:15 zookeeper | [2025-06-16 14:55:04,725] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 15:01:15 zookeeper | [2025-06-16 14:55:04,725] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 15:01:15 zookeeper | [2025-06-16 14:55:04,725] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 15:01:15 zookeeper | [2025-06-16 14:55:04,727] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer) 15:01:15 zookeeper | [2025-06-16 14:55:04,727] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer) 15:01:15 zookeeper | [2025-06-16 14:55:04,728] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper) 15:01:15 zookeeper | [2025-06-16 14:55:04,728] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper) 15:01:15 zookeeper | [2025-06-16 14:55:04,728] INFO Created server with tickTime 3000 ms minSessionTimeout 6000 ms maxSessionTimeout 60000 ms clientPortListenBacklog -1 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer) 15:01:15 zookeeper | [2025-06-16 14:55:04,748] INFO Logging initialized @371ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log) 15:01:15 zookeeper | [2025-06-16 14:55:04,800] WARN o.e.j.s.ServletContextHandler@6150c3ec{/,null,STOPPED} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler) 15:01:15 zookeeper | [2025-06-16 14:55:04,800] WARN Empty contextPath 
(org.eclipse.jetty.server.handler.ContextHandler) 15:01:15 zookeeper | [2025-06-16 14:55:04,817] INFO jetty-9.4.57.v20241219; built: 2025-01-08T21:24:30.412Z; git: df524e6b29271c2e09ba9aea83c18dc9db464a31; jvm 17.0.14+7 (org.eclipse.jetty.server.Server) 15:01:15 zookeeper | [2025-06-16 14:55:04,849] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session) 15:01:15 zookeeper | [2025-06-16 14:55:04,849] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session) 15:01:15 zookeeper | [2025-06-16 14:55:04,850] INFO node0 Scavenging every 600000ms (org.eclipse.jetty.server.session) 15:01:15 zookeeper | [2025-06-16 14:55:04,856] WARN ServletContext@o.e.j.s.ServletContextHandler@6150c3ec{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler) 15:01:15 zookeeper | [2025-06-16 14:55:04,865] INFO Started o.e.j.s.ServletContextHandler@6150c3ec{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler) 15:01:15 zookeeper | [2025-06-16 14:55:04,874] INFO Started ServerConnector@222545dc{HTTP/1.1, (http/1.1)}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector) 15:01:15 zookeeper | [2025-06-16 14:55:04,874] INFO Started @501ms (org.eclipse.jetty.server.Server) 15:01:15 zookeeper | [2025-06-16 14:55:04,874] INFO Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer) 15:01:15 zookeeper | [2025-06-16 14:55:04,877] INFO Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory) 15:01:15 zookeeper | [2025-06-16 14:55:04,878] WARN maxCnxns is not configured, using default value 0. (org.apache.zookeeper.server.ServerCnxnFactory) 15:01:15 zookeeper | [2025-06-16 14:55:04,879] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. 
(org.apache.zookeeper.server.NIOServerCnxnFactory) 15:01:15 zookeeper | [2025-06-16 14:55:04,879] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory) 15:01:15 zookeeper | [2025-06-16 14:55:04,889] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) 15:01:15 zookeeper | [2025-06-16 14:55:04,889] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) 15:01:15 zookeeper | [2025-06-16 14:55:04,889] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase) 15:01:15 zookeeper | [2025-06-16 14:55:04,889] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase) 15:01:15 zookeeper | [2025-06-16 14:55:04,893] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream) 15:01:15 zookeeper | [2025-06-16 14:55:04,893] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) 15:01:15 zookeeper | [2025-06-16 14:55:04,896] INFO Snapshot loaded in 6 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase) 15:01:15 zookeeper | [2025-06-16 14:55:04,896] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) 15:01:15 zookeeper | [2025-06-16 14:55:04,897] INFO Snapshot taken in 0 ms (org.apache.zookeeper.server.ZooKeeperServer) 15:01:15 zookeeper | [2025-06-16 14:55:04,903] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor) 15:01:15 zookeeper | [2025-06-16 14:55:04,903] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms (org.apache.zookeeper.server.RequestThrottler) 15:01:15 zookeeper | [2025-06-16 14:55:04,915] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager) 15:01:15 zookeeper | [2025-06-16 14:55:04,915] INFO ZooKeeper audit is disabled. (org.apache.zookeeper.audit.ZKAuditProvider) 15:01:15 zookeeper | [2025-06-16 14:55:06,187] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog) 15:01:15 Tearing down containers... 
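The ordered Stopping/Stopped/Removing/Removed lines that follow are standard docker compose teardown output: dependent containers (policy-csit, grafana, prometheus, policy-opa-pdp) go down before the services they use (policy-pap, kafka, zookeeper, postgres), and the compose network is removed last. A sketch of the equivalent manual command, assuming the CSIT stack lives in a compose project directory named "compose" (consistent with the compose_default network name; the actual file path is not shown in this log):

#!/bin/bash
# Tear down the CSIT stack: stops and removes containers, then the network.
cd compose && docker compose down --remove-orphans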
15:01:15 Tearing down containers...
15:01:15 Container policy-csit Stopping
15:01:15 Container policy-opa-pdp Stopping
15:01:15 Container grafana Stopping
15:01:15 Container policy-csit Stopped
15:01:15 Container policy-csit Removing
15:01:15 Container policy-csit Removed
15:01:15 Container grafana Stopped
15:01:15 Container grafana Removing
15:01:15 Container grafana Removed
15:01:15 Container prometheus Stopping
15:01:16 Container prometheus Stopped
15:01:16 Container prometheus Removing
15:01:16 Container prometheus Removed
15:01:25 Container policy-opa-pdp Stopped
15:01:25 Container policy-opa-pdp Removing
15:01:25 Container policy-opa-pdp Removed
15:01:25 Container policy-pap Stopping
15:01:36 Container policy-pap Stopped
15:01:36 Container policy-pap Removing
15:01:36 Container policy-pap Removed
15:01:36 Container kafka Stopping
15:01:36 Container policy-api Stopping
15:01:37 Container kafka Stopped
15:01:37 Container kafka Removing
15:01:37 Container kafka Removed
15:01:37 Container zookeeper Stopping
15:01:37 Container zookeeper Stopped
15:01:37 Container zookeeper Removing
15:01:37 Container zookeeper Removed
15:01:46 Container policy-api Stopped
15:01:46 Container policy-api Removing
15:01:46 Container policy-api Removed
15:01:46 Container policy-db-migrator Stopping
15:01:46 Container policy-db-migrator Stopped
15:01:46 Container policy-db-migrator Removing
15:01:46 Container policy-db-migrator Removed
15:01:46 Container postgres Stopping
15:01:46 Container postgres Stopped
15:01:46 Container postgres Removing
15:01:46 Container postgres Removed
15:01:46 Network compose_default Removing
15:01:47 Network compose_default Removed
15:01:47 $ ssh-agent -k
15:01:47 unset SSH_AUTH_SOCK;
15:01:47 unset SSH_AGENT_PID;
15:01:47 echo Agent pid 2035 killed;
15:01:47 [ssh-agent] Stopped.
15:01:47 Robot results publisher started...
15:01:47 INFO: Checking test criticality is deprecated and will be dropped in a future release!
15:01:47 -Parsing output xml:
15:01:47 Done!
15:01:47 -Copying log files to build dir:
15:01:47 Done!
15:01:47 -Assigning results to build:
15:01:47 Done!
15:01:47 -Checking thresholds:
15:01:47 Done!
15:01:47 Done publishing Robot results.
15:01:47 [PostBuildScript] - [INFO] Executing post build scripts.
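The Robot results publisher above parses output.xml, copies the log files, and checks pass thresholds; the criticality check it warns about was deprecated with Robot Framework 4. The same verdict can be recomputed from the archived output.xml. A minimal sketch, assuming the robotframework package (version 4 or later) is installed and output.xml sits in the current directory:

    from robot.api import ExecutionResult

    # Parse the same output.xml the publisher step consumed.
    result = ExecutionResult("output.xml")
    stats = result.statistics.total  # overall pass/fail/skip counters
    print(f"passed={stats.passed} failed={stats.failed} skipped={stats.skipped}")

    # Mirror a 100% pass threshold: any failed test fails the build.
    raise SystemExit(0 if stats.failed == 0 else 1)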
15:01:47 [policy-opa-pdp-master-project-csit-verify-opa-pdp] $ /bin/bash /tmp/jenkins11823396161447295707.sh
15:01:47 ---> sysstat.sh
15:01:48 [policy-opa-pdp-master-project-csit-verify-opa-pdp] $ /bin/bash /tmp/jenkins15009546184638702059.sh
15:01:48 ---> package-listing.sh
15:01:48 ++ facter osfamily
15:01:48 ++ tr '[:upper:]' '[:lower:]'
15:01:48 + OS_FAMILY=debian
15:01:48 + workspace=/w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp
15:01:48 + START_PACKAGES=/tmp/packages_start.txt
15:01:48 + END_PACKAGES=/tmp/packages_end.txt
15:01:48 + DIFF_PACKAGES=/tmp/packages_diff.txt
15:01:48 + PACKAGES=/tmp/packages_start.txt
15:01:48 + '[' /w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp ']'
15:01:48 + PACKAGES=/tmp/packages_end.txt
15:01:48 + case "${OS_FAMILY}" in
15:01:48 + dpkg -l
15:01:48 + grep '^ii'
15:01:48 + '[' -f /tmp/packages_start.txt ']'
15:01:48 + '[' -f /tmp/packages_end.txt ']'
15:01:48 + diff /tmp/packages_start.txt /tmp/packages_end.txt
15:01:48 + '[' /w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp ']'
15:01:48 + mkdir -p /w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp/archives/
15:01:48 + cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp/archives/
15:01:48 [policy-opa-pdp-master-project-csit-verify-opa-pdp] $ /bin/bash /tmp/jenkins8429963181298043489.sh
15:01:48 ---> capture-instance-metadata.sh
15:01:48 Setup pyenv:
15:01:48 system
15:01:48 3.8.13
15:01:48 3.9.13
15:01:48 * 3.10.6 (set by /w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp/.python-version)
15:01:48 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-YgXf from file:/tmp/.os_lf_venv
15:01:50 lf-activate-venv(): INFO: Installing: lftools
15:01:59 lf-activate-venv(): INFO: Adding /tmp/venv-YgXf/bin to PATH
15:01:59 INFO: Running in OpenStack, capturing instance metadata
15:01:59 [policy-opa-pdp-master-project-csit-verify-opa-pdp] $ /bin/bash /tmp/jenkins6421061497835780923.sh
15:01:59 provisioning config files...
15:01:59 copy managed file [jenkins-log-archives-settings] to file:/w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp@tmp/config15747267652260893033tmp
15:02:00 Regular expression run condition: Expression=[^.*logs-s3.*], Label=[]
15:02:00 Run condition [Regular expression match] preventing perform for step [Provide Configuration files]
15:02:00 [EnvInject] - Injecting environment variables from a build step.
15:02:00 [EnvInject] - Injecting as environment variables the properties content
15:02:00 SERVER_ID=logs
15:02:00
15:02:00 [EnvInject] - Variables injected successfully.
15:02:00 [policy-opa-pdp-master-project-csit-verify-opa-pdp] $ /bin/bash /tmp/jenkins1678904830740904333.sh
15:02:00 ---> create-netrc.sh
15:02:00 [policy-opa-pdp-master-project-csit-verify-opa-pdp] $ /bin/bash /tmp/jenkins17584171005431523973.sh
15:02:00 ---> python-tools-install.sh
15:02:00 Setup pyenv:
15:02:00 system
15:02:00 3.8.13
15:02:00 3.9.13
15:02:00 * 3.10.6 (set by /w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp/.python-version)
15:02:00 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-YgXf from file:/tmp/.os_lf_venv
15:02:02 lf-activate-venv(): INFO: Installing: lftools
15:02:10 lf-activate-venv(): INFO: Adding /tmp/venv-YgXf/bin to PATH
15:02:10 [policy-opa-pdp-master-project-csit-verify-opa-pdp] $ /bin/bash /tmp/jenkins5001129739189165210.sh
15:02:10 ---> sudo-logs.sh
15:02:10 Archiving 'sudo' log..
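The package-listing.sh trace above implements a simple before/after inventory: "dpkg -l | grep '^ii'" is captured to /tmp/packages_start.txt at job start and /tmp/packages_end.txt at job end, the two are diffed into /tmp/packages_diff.txt, and all three files are copied into the workspace archives/ directory. The same bookkeeping expressed in Python, as an illustrative sketch only (paths taken from the trace; it must run on a dpkg-based system):

    import difflib
    import pathlib
    import shutil
    import subprocess

    WORKSPACE = pathlib.Path("/w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp")
    START = pathlib.Path("/tmp/packages_start.txt")
    END = pathlib.Path("/tmp/packages_end.txt")
    DIFF = pathlib.Path("/tmp/packages_diff.txt")

    def installed_packages() -> list[str]:
        # Equivalent of: dpkg -l | grep '^ii'
        out = subprocess.run(["dpkg", "-l"], capture_output=True, text=True, check=True)
        return [line for line in out.stdout.splitlines() if line.startswith("ii")]

    # At job end the current package list becomes packages_end.txt ...
    END.write_text("\n".join(installed_packages()) + "\n")

    # ... and, if a start-of-job snapshot exists, the delta is recorded too.
    if START.exists():
        delta = difflib.unified_diff(START.read_text().splitlines(),
                                     END.read_text().splitlines(), lineterm="")
        DIFF.write_text("\n".join(delta) + "\n")

    # Copy everything next to the build artifacts, mirroring the cp -f above.
    archives = WORKSPACE / "archives"
    archives.mkdir(parents=True, exist_ok=True)
    for path in (DIFF, END, START):
        if path.exists():
            shutil.copy(path, archives)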
15:02:10 [policy-opa-pdp-master-project-csit-verify-opa-pdp] $ /bin/bash /tmp/jenkins3886950376762225892.sh
15:02:10 ---> job-cost.sh
15:02:10 Setup pyenv:
15:02:11 system
15:02:11 3.8.13
15:02:11 3.9.13
15:02:11 * 3.10.6 (set by /w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp/.python-version)
15:02:11 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-YgXf from file:/tmp/.os_lf_venv
15:02:13 lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15
15:02:18 lf-activate-venv(): INFO: Adding /tmp/venv-YgXf/bin to PATH
15:02:18 INFO: No Stack...
15:02:18 INFO: Retrieving Pricing Info for: v3-standard-8
15:02:18 INFO: Archiving Costs
15:02:18 [policy-opa-pdp-master-project-csit-verify-opa-pdp] $ /bin/bash -l /tmp/jenkins7882485030487315476.sh
15:02:18 ---> logs-deploy.sh
15:02:18 Setup pyenv:
15:02:18 system
15:02:18 3.8.13
15:02:18 3.9.13
15:02:18 * 3.10.6 (set by /w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp/.python-version)
15:02:18 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-YgXf from file:/tmp/.os_lf_venv
15:02:21 lf-activate-venv(): INFO: Installing: lftools
15:02:29 lf-activate-venv(): INFO: Adding /tmp/venv-YgXf/bin to PATH
15:02:29 INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/policy-opa-pdp-master-project-csit-verify-opa-pdp/160
15:02:29 INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt
15:02:30 Archives upload complete.
15:02:30 INFO: archiving logs to Nexus
15:02:31 ---> uname -a:
15:02:31 Linux prd-ubuntu1804-docker-8c-8g-21631 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
15:02:31
15:02:31
15:02:31 ---> lscpu:
15:02:31 Architecture: x86_64
15:02:31 CPU op-mode(s): 32-bit, 64-bit
15:02:31 Byte Order: Little Endian
15:02:31 CPU(s): 8
15:02:31 On-line CPU(s) list: 0-7
15:02:31 Thread(s) per core: 1
15:02:31 Core(s) per socket: 1
15:02:31 Socket(s): 8
15:02:31 NUMA node(s): 1
15:02:31 Vendor ID: AuthenticAMD
15:02:31 CPU family: 23
15:02:31 Model: 49
15:02:31 Model name: AMD EPYC-Rome Processor
15:02:31 Stepping: 0
15:02:31 CPU MHz: 2799.998
15:02:31 BogoMIPS: 5599.99
15:02:31 Virtualization: AMD-V
15:02:31 Hypervisor vendor: KVM
15:02:31 Virtualization type: full
15:02:31 L1d cache: 32K
15:02:31 L1i cache: 32K
15:02:31 L2 cache: 512K
15:02:31 L3 cache: 16384K
15:02:31 NUMA node0 CPU(s): 0-7
15:02:31 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities
15:02:31
15:02:31
15:02:31 ---> nproc:
15:02:31 8
15:02:31
15:02:31
15:02:31 ---> df -h:
15:02:31 Filesystem Size Used Avail Use% Mounted on
15:02:31 udev 16G 0 16G 0% /dev
15:02:31 tmpfs 3.2G 708K 3.2G 1% /run
15:02:31 /dev/vda1 155G 15G 141G 10% /
15:02:31 tmpfs 16G 0 16G 0% /dev/shm
15:02:31 tmpfs 5.0M 0 5.0M 0% /run/lock
15:02:31 tmpfs 16G 0 16G 0% /sys/fs/cgroup
15:02:31 /dev/vda15 105M 4.4M 100M 5% /boot/efi
15:02:31 tmpfs 3.2G 0 3.2G 0% /run/user/1001
15:02:31
15:02:31
15:02:31 ---> free -m:
15:02:31 total used free shared buff/cache available
15:02:31 Mem: 32167 876 24054 0 7236 30835
15:02:31 Swap: 1023 0 1023
15:02:31
15:02:31
15:02:31 ---> ip addr:
15:02:31 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
15:02:31 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
15:02:31 inet 127.0.0.1/8 scope host lo
15:02:31 valid_lft forever preferred_lft forever
15:02:31 inet6 ::1/128 scope host
15:02:31 valid_lft forever preferred_lft forever
15:02:31 2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000
15:02:31 link/ether fa:16:3e:42:5a:0f brd ff:ff:ff:ff:ff:ff
15:02:31 inet 10.30.106.233/23 brd 10.30.107.255 scope global dynamic ens3
15:02:31 valid_lft 85816sec preferred_lft 85816sec
15:02:31 inet6 fe80::f816:3eff:fe42:5a0f/64 scope link
15:02:31 valid_lft forever preferred_lft forever
15:02:31 3: docker0: mtu 1500 qdisc noqueue state DOWN group default
15:02:31 link/ether 02:42:ad:3e:c7:5b brd ff:ff:ff:ff:ff:ff
15:02:31 inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0
15:02:31 valid_lft forever preferred_lft forever
15:02:31 inet6 fe80::42:adff:fe3e:c75b/64 scope link
15:02:31 valid_lft forever preferred_lft forever
15:02:31
15:02:31
15:02:31 ---> sar -b -r -n DEV:
15:02:31 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-21631) 06/16/25 _x86_64_ (8 CPU)
15:02:31
15:02:31 14:52:49 LINUX RESTART (8 CPU)
15:02:31
15:02:31 14:53:01 tps rtps wtps bread/s bwrtn/s
15:02:31 14:54:02 413.31 74.00 339.31 5344.04 49445.23
15:02:31 14:55:01 552.56 24.70 527.85 2730.46 211571.81
15:02:31 14:56:01 270.77 0.10 270.67 5.73 16442.86
15:02:31 14:57:01 8.17 0.00 8.17 0.00 4982.90
15:02:31 14:58:01 9.57 0.02 9.55 0.13 4985.04
15:02:31 14:59:01 221.06 0.43 220.63 37.59 38658.62
15:02:31 15:00:01 11.46 0.00 11.46 0.00 5030.72
15:02:31 15:01:01 13.73 0.00 13.73 0.00 5103.42
15:02:31 15:02:01 58.62 1.27 57.36 101.72 5962.61
15:02:31 Average: 172.55 11.14 161.41 909.94 37701.04
15:02:31
15:02:31 14:53:01 kbmemfree kbavail kbmemused %memused kbbuffers kbcached kbcommit %commit kbactive kbinact kbdirty
15:02:31 14:54:02 30150804 31681464 2788416 8.47 67532 1775712 1401028 4.12 870076 1632224 157496
15:02:31 14:55:01 25102960 31476764 7836260 23.79 149620 6314060 4123532 12.13 1174636 6074876 1436
15:02:31 14:56:01 23353320 30005948 9585900 29.10 163624 6582844 7369440 21.68 2862392 6078884 2212
15:02:31 14:57:01 23331704 29984984 9607516 29.17 163780 6584036 7591776 22.34 2884224 6077456 180
15:02:31 14:58:01 23309596 29958012 9629624 29.23 163976 6579452 7687848 22.62 2911428 6070648 96
15:02:31 14:59:01 22683752 29871732 10255468 31.13 204648 7024792 8007264 23.56 3111828 6435836 2016
15:02:31 15:00:01 22627464 29816704 10311756 31.31 204788 7025556 8011928 23.57 3173632 6430624 240
15:02:31 15:01:01 22623492 29813220 10315728 31.32 204952 7025756 8036148 23.64 3176652 6430336 456
15:02:31 15:02:01 24648516 31597624 8290704 25.17 206092 6779008 1635448 4.81 1452784 6204272 36364
15:02:31 Average: 24203512 30467384 8735708 26.52 169890 6187913 5984935 17.61 2401961 5715017 22277
15:02:31
15:02:31 14:53:01 IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil
15:02:31 14:54:02 lo 2.07 2.07 0.22 0.22 0.00 0.00 0.00 0.00
15:02:31 14:54:02 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
15:02:31 14:54:02 ens3 493.32 336.26 1705.99 82.12 0.00 0.00 0.00 0.00
15:02:31 14:55:01 br-016bbf9f036b 0.00 0.12 0.00 0.01 0.00 0.00 0.00 0.00
15:02:31 14:55:01 veth0e699f5 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
15:02:31 14:55:01 vethbe736de 0.00 0.12 0.00 0.01 0.00 0.00 0.00 0.00
15:02:31 14:55:01 lo 13.69 13.69 1.25 1.25 0.00 0.00 0.00 0.00
15:02:31 14:56:01 veth27f2857 1.08 1.30 0.07 0.07 0.00 0.00 0.00 0.00
15:02:31 14:56:01 br-016bbf9f036b 38.74 44.44 2.72 256.84 0.00 0.00 0.00 0.00
15:02:31 14:56:01 vethbe736de 0.15 0.42 0.01 0.02 0.00 0.00 0.00 0.00
15:02:31 14:56:01 lo 1.47 1.47 0.12 0.12 0.00 0.00 0.00 0.00
15:02:31 14:57:01 veth27f2857 1.23 1.63 0.16 0.18 0.00 0.00 0.00 0.00
15:02:31 14:57:01 br-016bbf9f036b 0.52 0.38 0.03 0.02 0.00 0.00 0.00 0.00
15:02:31 14:57:01 vethbe736de 0.35 0.33 0.04 0.90 0.00 0.00 0.00 0.00
15:02:31 14:57:01 lo 1.40 1.40 0.11 0.11 0.00 0.00 0.00 0.00
15:02:31 14:58:01 veth27f2857 3.23 5.00 0.53 0.57 0.00 0.00 0.00 0.00
15:02:31 14:58:01 br-016bbf9f036b 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
15:02:31 14:58:01 vethbe736de 0.58 0.58 0.06 1.16 0.00 0.00 0.00 0.00
15:02:31 14:58:01 lo 1.20 1.20 0.09 0.09 0.00 0.00 0.00 0.00
15:02:31 14:59:01 veth27f2857 3.27 5.10 0.51 0.55 0.00 0.00 0.00 0.00
15:02:31 14:59:01 br-016bbf9f036b 0.25 0.27 0.02 0.02 0.00 0.00 0.00 0.00
15:02:31 14:59:01 vethbe736de 0.60 0.67 0.06 1.22 0.00 0.00 0.00 0.00
15:02:31 14:59:01 lo 2.60 2.60 0.22 0.22 0.00 0.00 0.00 0.00
15:02:31 15:00:01 veth27f2857 4.77 6.83 0.73 0.93 0.00 0.00 0.00 0.00
15:02:31 15:00:01 br-016bbf9f036b 0.02 0.00 0.00 0.00 0.00 0.00 0.00 0.00
15:02:31 15:00:01 vethbe736de 0.62 0.63 0.06 1.28 0.00 0.00 0.00 0.00
15:02:31 15:00:01 lo 1.20 1.20 0.09 0.09 0.00 0.00 0.00 0.00
15:02:31 15:01:01 veth27f2857 4.63 6.42 0.63 0.64 0.00 0.00 0.00 0.00
15:02:31 15:01:01 br-016bbf9f036b 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
15:02:31 15:01:01 vethbe736de 0.58 0.57 0.06 1.28 0.00 0.00 0.00 0.00
15:02:31 15:01:01 lo 1.40 1.40 0.11 0.11 0.00 0.00 0.00 0.00
15:02:31 15:02:01 lo 2.60 2.60 0.24 0.24 0.00 0.00 0.00 0.00
15:02:31 15:02:01 docker0 131.51 182.94 8.39 1348.20 0.00 0.00 0.00 0.00
15:02:31 15:02:01 ens3 2034.31 1276.15 37370.28 192.91 0.00 0.00 0.00 0.00
15:02:31 Average: lo 3.05 3.05 0.27 0.27 0.00 0.00 0.00 0.00
15:02:31 Average: docker0 14.64 20.36 0.93 150.07 0.00 0.00 0.00 0.00
15:02:31 Average: ens3 225.78 141.40 4159.18 21.41 0.00 0.00 0.00 0.00
15:02:31
15:02:31
15:02:31 ---> sar -P ALL:
15:02:31 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-21631) 06/16/25 _x86_64_ (8 CPU)
15:02:31
15:02:31 14:52:49 LINUX RESTART (8 CPU)
15:02:31
15:02:31 14:53:01 CPU %user %nice %system %iowait %steal %idle
15:02:31 14:54:02 all 10.05 0.00 1.34 1.68 0.04 86.89
15:02:31 14:54:02 0 4.12 0.00 1.09 3.14 0.02 91.63
15:02:31 14:54:02 1 6.28 0.00 0.97 0.52 0.07 92.17
15:02:31 14:54:02 2 15.38 0.00 1.44 0.75 0.05 82.38
15:02:31 14:54:02 3 17.41 0.00 2.39 3.62 0.05 76.54
15:02:31 14:54:02 4 10.83 0.00 0.77 0.97 0.02 87.41
15:02:31 14:54:02 5 3.09 0.00 2.01 4.08 0.05 90.77
15:02:31 14:54:02 6 7.12 0.00 0.72 0.12 0.03 92.01
15:02:31 14:54:02 7 16.10 0.00 1.38 0.25 0.03 82.24
15:02:31 14:55:01 all 19.33 0.00 8.47 4.49 0.11 67.61
15:02:31 14:55:01 0 15.45 0.00 8.26 1.41 0.14 74.74
15:02:31 14:55:01 1 15.04 0.00 8.12 7.08 0.09 69.68
15:02:31 14:55:01 2 20.77 0.00 7.24 0.97 0.12 70.90
15:02:31 14:55:01 3 19.38 0.00 8.46 1.94 0.10 70.12
15:02:31 14:55:01 4 32.11 0.00 9.88 5.15 0.10 52.76
15:02:31 14:55:01 5 16.45 0.00 8.35 2.81 0.10 72.29
15:02:31 14:55:01 6 16.44 0.00 10.01 14.70 0.12 58.73
15:02:31 14:55:01 7 18.92 0.00 7.44 1.95 0.07 71.63
15:02:31 14:56:01 all 26.84 0.00 3.56 10.32 0.11 59.18
15:02:31 14:56:01 0 28.54 0.00 3.65 0.49 0.10 67.23
15:02:31 14:56:01 1 27.96 0.00 3.96 0.85 0.08 67.15
15:02:31 14:56:01 2 26.98 0.00 3.95 22.14 0.12 46.81
15:02:31 14:56:01 3 22.77 0.00 2.91 21.86 0.07 52.40
15:02:31 14:56:01 4 26.18 0.00 3.28 20.48 0.10 49.96
15:02:31 14:56:01 5 22.86 0.00 3.25 15.54 0.07 58.27
15:02:31 14:56:01 6 36.86 0.00 4.05 0.44 0.10 58.56
15:02:31 14:56:01 7 22.58 0.00 3.44 0.62 0.19 73.17
15:02:31 14:57:01 all 1.28 0.00 0.22 0.08 0.04 98.38
15:02:31 14:57:01 0 1.13 0.00 0.27 0.00 0.03 98.57
15:02:31 14:57:01 1 1.30 0.00 0.20 0.00 0.05 98.45
15:02:31 14:57:01 2 1.82 0.00 0.25 0.00 0.03 97.89
15:02:31 14:57:01 3 1.25 0.00 0.17 0.57 0.03 97.98
15:02:31 14:57:01 4 1.50 0.00 0.30 0.00 0.07 98.13
15:02:31 14:57:01 5 1.13 0.00 0.13 0.00 0.03 98.70
15:02:31 14:57:01 6 0.94 0.00 0.22 0.02 0.03 98.80
15:02:31 14:57:01 7 1.20 0.00 0.18 0.02 0.05 98.55
15:02:31 14:58:01 all 2.04 0.00 0.31 0.08 0.04 97.53
15:02:31 14:58:01 0 1.88 0.00 0.22 0.00 0.03 97.87
15:02:31 14:58:01 1 2.69 0.00 0.28 0.00 0.03 96.99
15:02:31 14:58:01 2 0.99 0.00 0.20 0.00 0.03 98.78
15:02:31 14:58:01 3 2.10 0.00 0.32 0.57 0.07 96.94
15:02:31 14:58:01 4 2.79 0.00 0.30 0.03 0.05 96.83
15:02:31 14:58:01 5 1.85 0.00 0.35 0.00 0.03 97.77
15:02:31 14:58:01 6 2.69 0.00 0.32 0.02 0.05 96.93
15:02:31 14:58:01 7 1.31 0.00 0.49 0.02 0.05 98.14
15:02:31 14:59:01 all 9.63 0.00 2.83 1.06 0.07 86.42
15:02:31 14:59:01 0 9.62 0.00 2.97 0.17 0.07 87.18
15:02:31 14:59:01 1 16.69 0.00 3.42 1.91 0.07 77.91
15:02:31 14:59:01 2 6.44 0.00 2.40 0.22 0.05 90.89
15:02:31 14:59:01 3 8.76 0.00 2.42 3.14 0.07 85.62
15:02:31 14:59:01 4 9.53 0.00 3.01 0.50 0.05 86.90
15:02:31 14:59:01 5 6.24 0.00 2.44 2.06 0.08 89.18
15:02:31 14:59:01 6 11.58 0.00 3.76 0.15 0.07 84.44
15:02:31 14:59:01 7 8.13 0.00 2.20 0.35 0.05 89.27
15:02:31 15:00:01 all 3.81 0.00 0.67 0.09 0.05 95.37
15:02:31 15:00:01 0 3.55 0.00 0.51 0.10 0.07 95.77
15:02:31 15:00:01 1 4.52 0.00 0.57 0.02 0.05 94.85
15:02:31 15:00:01 2 4.51 0.00 0.47 0.00 0.03 94.99
15:02:31 15:00:01 3 5.30 0.00 0.74 0.53 0.05 93.38
15:02:31 15:00:01 4 2.84 0.00 1.24 0.05 0.03 95.84
15:02:31 15:00:01 5 2.81 0.00 0.52 0.02 0.08 96.58
15:02:31 15:00:01 6 3.94 0.00 0.57 0.02 0.03 95.45
15:02:31 15:00:01 7 3.04 0.00 0.79 0.00 0.08 96.09
15:02:31 15:01:01 all 1.49 0.00 0.31 0.08 0.05 98.06
15:02:31 15:01:01 0 3.43 0.00 0.31 0.10 0.05 96.11
15:02:31 15:01:01 1 1.95 0.00 0.27 0.02 0.02 97.75
15:02:31 15:01:01 2 1.05 0.00 0.28 0.00 0.02 98.65
15:02:31 15:01:01 3 1.34 0.00 0.25 0.52 0.07 97.83
15:02:31 15:01:01 4 0.87 0.00 0.50 0.00 0.07 98.56
15:02:31 15:01:01 5 1.32 0.00 0.25 0.02 0.07 98.35
15:02:31 15:01:01 6 1.25 0.00 0.25 0.00 0.05 98.45
15:02:31 15:01:01 7 0.70 0.00 0.38 0.00 0.07 98.85
15:02:31 15:02:01 all 4.19 0.00 0.95 0.19 0.04 94.63
15:02:31 15:02:01 0 3.00 0.00 0.68 0.13 0.03 96.16
15:02:31 15:02:01 1 1.80 0.00 0.94 0.07 0.05 97.14
15:02:31 15:02:01 2 1.45 0.00 0.79 0.03 0.03 97.69
15:02:31 15:02:01 3 17.24 0.00 1.30 0.52 0.05 80.89
15:02:31 15:02:01 4 2.02 0.00 0.99 0.58 0.03 96.38
15:02:31 15:02:01 5 2.40 0.00 0.99 0.05 0.02 96.54
15:02:31 15:02:01 6 1.32 0.00 0.82 0.03 0.05 97.78
15:02:31 15:02:01 7 4.28 0.00 1.10 0.07 0.05 94.50
15:02:31 Average: all 8.67 0.00 2.04 1.99 0.06 87.25
15:02:31 Average: 0 7.79 0.00 1.95 0.61 0.06 89.59
15:02:31 Average: 1 8.61 0.00 2.04 1.13 0.06 88.17
15:02:31 Average: 2 8.74 0.00 1.86 2.68 0.05 86.67
15:02:31 Average: 3 10.57 0.00 2.07 3.70 0.06 83.59
15:02:31 Average: 4 9.74 0.00 2.22 3.06 0.06 84.93
15:02:31 Average: 5 6.42 0.00 2.01 2.73 0.06 88.79
15:02:31 Average: 6 9.06 0.00 2.25 1.65 0.06 86.98
15:02:31 Average: 7 8.43 0.00 1.91 0.36 0.07 89.23
15:02:31
15:02:31
15:02:31
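The sar -P ALL capture above closes the log; its "Average:" rows are the quickest health summary of the run (roughly 8.67% user, 2.04% system, 1.99% iowait, and 87.25% idle across all eight CPUs, with I/O wait peaking at 10.32% in the 14:56 sample). Those averages can be pulled out of a saved console log programmatically; a minimal sketch, assuming the log has been saved to a local file named console.log and that the column layout matches the sysstat output shown:

    # Print the per-CPU averages from the final "sar -P ALL" table.
    # Row layout: Average: <cpu> %user %nice %system %iowait %steal %idle
    with open("console.log") as fh:
        for line in fh:
            parts = line.split()
            if "Average:" not in parts:
                continue
            fields = parts[parts.index("Average:") + 1:]
            # Exactly seven trailing columns with a CPU id ("all" or a digit)
            # marks the CPU table; the sar -b, -r, and -n DEV averages in the
            # same log have different column counts and are skipped.
            if len(fields) == 7 and (fields[0] == "all" or fields[0].isdigit()):
                cpu, user, nice, system, iowait, steal, idle = fields
                print(f"cpu={cpu:>3} user={user}% iowait={iowait}% idle={idle}%")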