14:54:50 Triggered by Gerrit: https://gerrit.onap.org/r/c/policy/docker/+/141264
14:54:50 Running as SYSTEM
14:54:50 [EnvInject] - Loading node environment variables.
14:54:50 Building remotely on prd-ubuntu1804-docker-8c-8g-20904 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp
14:54:50 [ssh-agent] Looking for ssh-agent implementation...
14:54:50 [ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
14:54:50 $ ssh-agent
14:54:50 SSH_AUTH_SOCK=/tmp/ssh-tzLpaNzTNqqV/agent.2049
14:54:50 SSH_AGENT_PID=2051
14:54:50 [ssh-agent] Started.
14:54:50 Running ssh-add (command line suppressed)
14:54:50 Identity added: /w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp@tmp/private_key_6775323941563531320.key (/w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp@tmp/private_key_6775323941563531320.key)
14:54:50 [ssh-agent] Using credentials onap-jobbuiler (Gerrit user)
14:54:50 The recommended git tool is: NONE
14:54:52 using credential onap-jenkins-ssh
14:54:52 Wiping out workspace first.
14:54:52 Cloning the remote Git repository
14:54:52 Cloning repository git://cloud.onap.org/mirror/policy/docker.git
14:54:52 > git init /w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp # timeout=10
14:54:52 Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
14:54:52 > git --version # timeout=10
14:54:52 > git --version # 'git version 2.17.1'
14:54:52 using GIT_SSH to set credentials Gerrit user
14:54:52 Verifying host key using manually-configured host key entries
14:54:52 > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git +refs/heads/*:refs/remotes/origin/* # timeout=30
14:54:52 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
14:54:52 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
14:54:53 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
14:54:53 Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
14:54:53 using GIT_SSH to set credentials Gerrit user
14:54:53 Verifying host key using manually-configured host key entries
14:54:53 > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git refs/changes/64/141264/1 # timeout=30
14:54:53 > git rev-parse 473f78ecac5fb75e5968b31a5bab95eaba72c803^{commit} # timeout=10
14:54:53 JENKINS-19022: warning: possible memory leak due to Git plugin usage; see: https://plugins.jenkins.io/git/#remove-git-plugin-buildsbybranch-builddata-script
14:54:53 Checking out Revision 473f78ecac5fb75e5968b31a5bab95eaba72c803 (refs/changes/64/141264/1)
14:54:53 > git config core.sparsecheckout # timeout=10
14:54:53 > git checkout -f 473f78ecac5fb75e5968b31a5bab95eaba72c803 # timeout=30
14:54:56 Commit message: "Add Fix fail handling in ACM runtime in CSIT"
14:54:56 > git rev-parse FETCH_HEAD^{commit} # timeout=10
14:54:56 > git rev-list --no-walk 8746ba7d00fb7412b3f40b6e85f47ce67cf7969c # timeout=10
14:54:56 provisioning config files...
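For reference, the clone-and-checkout sequence above is the standard Gerrit verify flow: fetch the change ref for patchset 1 of change 141264, then check out FETCH_HEAD detached. A minimal equivalent outside Jenkins, assuming read access to the mirror (the directory name is illustrative):

$ git init docker && cd docker
$ git fetch git://cloud.onap.org/mirror/policy/docker.git refs/changes/64/141264/1
$ git checkout -f FETCH_HEAD   # detached at 473f78ecac5fb75e5968b31a5bab95eaba72c803

Gerrit change refs have the form refs/changes/<NN>/<change>/<patchset>, where <NN> is the last two digits of the change number; hence 64/141264/1 above.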
14:54:56 copy managed file [npmrc] to file:/home/jenkins/.npmrc
14:54:56 copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf
14:54:56 [policy-xacml-pdp-master-project-csit-verify-xacml-pdp] $ /bin/bash /tmp/jenkins10686153200799223814.sh
14:54:56 ---> python-tools-install.sh
14:54:56 Setup pyenv:
14:54:57 * system (set by /opt/pyenv/version)
14:54:57 * 3.8.13 (set by /opt/pyenv/version)
14:54:57 * 3.9.13 (set by /opt/pyenv/version)
14:54:57 * 3.10.6 (set by /opt/pyenv/version)
14:55:01 lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-qCrN
14:55:01 lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv
14:55:05 lf-activate-venv(): INFO: Installing: lftools
14:55:29 lf-activate-venv(): INFO: Adding /tmp/venv-qCrN/bin to PATH
14:55:29 Generating Requirements File
14:55:47 Python 3.10.6
14:55:48 pip 25.1.1 from /tmp/venv-qCrN/lib/python3.10/site-packages/pip (python 3.10)
14:55:48 appdirs==1.4.4
14:55:48 argcomplete==3.6.2
14:55:48 aspy.yaml==1.3.0
14:55:48 attrs==25.3.0
14:55:48 autopage==0.5.2
14:55:48 beautifulsoup4==4.13.4
14:55:48 boto3==1.38.36
14:55:48 botocore==1.38.36
14:55:48 bs4==0.0.2
14:55:48 cachetools==5.5.2
14:55:48 certifi==2025.4.26
14:55:48 cffi==1.17.1
14:55:48 cfgv==3.4.0
14:55:48 chardet==5.2.0
14:55:48 charset-normalizer==3.4.2
14:55:48 click==8.2.1
14:55:48 cliff==4.10.0
14:55:48 cmd2==2.6.1
14:55:48 cryptography==3.3.2
14:55:48 debtcollector==3.0.0
14:55:48 decorator==5.2.1
14:55:48 defusedxml==0.7.1
14:55:48 Deprecated==1.2.18
14:55:48 distlib==0.3.9
14:55:48 dnspython==2.7.0
14:55:48 docker==7.1.0
14:55:48 dogpile.cache==1.4.0
14:55:48 durationpy==0.10
14:55:48 email_validator==2.2.0
14:55:48 filelock==3.18.0
14:55:48 future==1.0.0
14:55:48 gitdb==4.0.12
14:55:48 GitPython==3.1.44
14:55:48 google-auth==2.40.3
14:55:48 httplib2==0.22.0
14:55:48 identify==2.6.12
14:55:48 idna==3.10
14:55:48 importlib-resources==1.5.0
14:55:48 iso8601==2.1.0
14:55:48 Jinja2==3.1.6
14:55:48 jmespath==1.0.1
14:55:48 jsonpatch==1.33
14:55:48 jsonpointer==3.0.0
14:55:48 jsonschema==4.24.0
14:55:48 jsonschema-specifications==2025.4.1
14:55:48 keystoneauth1==5.11.1
14:55:48 kubernetes==33.1.0
14:55:48 lftools==0.37.13
14:55:48 lxml==5.4.0
14:55:48 MarkupSafe==3.0.2
14:55:48 msgpack==1.1.1
14:55:48 multi_key_dict==2.0.3
14:55:48 munch==4.0.0
14:55:48 netaddr==1.3.0
14:55:48 niet==1.4.2
14:55:48 nodeenv==1.9.1
14:55:48 oauth2client==4.1.3
14:55:48 oauthlib==3.2.2
14:55:48 openstacksdk==4.6.0
14:55:48 os-client-config==2.1.0
14:55:48 os-service-types==1.7.0
14:55:48 osc-lib==4.0.2
14:55:48 oslo.config==9.8.0
14:55:48 oslo.context==6.0.0
14:55:48 oslo.i18n==6.5.1
14:55:48 oslo.log==7.1.0
14:55:48 oslo.serialization==5.7.0
14:55:48 oslo.utils==9.0.0
14:55:48 packaging==25.0
14:55:48 pbr==6.1.1
14:55:48 platformdirs==4.3.8
14:55:48 prettytable==3.16.0
14:55:48 psutil==7.0.0
14:55:48 pyasn1==0.6.1
14:55:48 pyasn1_modules==0.4.2
14:55:48 pycparser==2.22
14:55:48 pygerrit2==2.0.15
14:55:48 PyGithub==2.6.1
14:55:48 PyJWT==2.10.1
14:55:48 PyNaCl==1.5.0
14:55:48 pyparsing==2.4.7
14:55:48 pyperclip==1.9.0
14:55:48 pyrsistent==0.20.0
14:55:48 python-cinderclient==9.7.0
14:55:48 python-dateutil==2.9.0.post0
14:55:48 python-heatclient==4.2.0
14:55:48 python-jenkins==1.8.2
14:55:48 python-keystoneclient==5.6.0
14:55:48 python-magnumclient==4.8.1
14:55:48 python-openstackclient==8.1.0
14:55:48 python-swiftclient==4.8.0
14:55:48 PyYAML==6.0.2
14:55:48 referencing==0.36.2
14:55:48 requests==2.32.4
14:55:48 requests-oauthlib==2.0.0
14:55:48 requestsexceptions==1.4.0
14:55:48 rfc3986==2.0.0
14:55:48 rpds-py==0.25.1
14:55:48 rsa==4.9.1
14:55:48 ruamel.yaml==0.18.14
14:55:48 ruamel.yaml.clib==0.2.12
14:55:48 s3transfer==0.13.0
14:55:48 simplejson==3.20.1
14:55:48 six==1.17.0
14:55:48 smmap==5.0.2
14:55:48 soupsieve==2.7
14:55:48 stevedore==5.4.1
14:55:48 tabulate==0.9.0
14:55:48 toml==0.10.2
14:55:48 tomlkit==0.13.3
14:55:48 tqdm==4.67.1
14:55:48 typing_extensions==4.14.0
14:55:48 tzdata==2025.2
14:55:48 urllib3==1.26.20
14:55:48 virtualenv==20.31.2
14:55:48 wcwidth==0.2.13
14:55:48 websocket-client==1.8.0
14:55:48 wrapt==1.17.2
14:55:48 xdg==6.0.0
14:55:48 xmltodict==0.14.2
14:55:48 yq==3.4.3
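The python-tools-install.sh step above boils down to: pick a Python from pyenv, create a venv, install lftools into it, and dump `pip freeze` as the requirements listing just shown. A minimal sketch of the equivalent, assuming python3.10 is on PATH (the venv path is illustrative):

$ python3.10 -m venv /tmp/venv-example
$ . /tmp/venv-example/bin/activate
$ pip install lftools
$ pip freeze   # yields a package==version listing like the one above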
14:55:48 [EnvInject] - Injecting environment variables from a build step.
14:55:48 [EnvInject] - Injecting as environment variables the properties content
14:55:48 SET_JDK_VERSION=openjdk17
14:55:48 GIT_URL="git://cloud.onap.org/mirror"
14:55:48
14:55:48 [EnvInject] - Variables injected successfully.
14:55:48 [policy-xacml-pdp-master-project-csit-verify-xacml-pdp] $ /bin/sh /tmp/jenkins2896797054681847457.sh
14:55:48 ---> update-java-alternatives.sh
14:55:48 ---> Updating Java version
14:55:48 ---> Ubuntu/Debian system detected
14:55:48 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode
14:55:48 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode
14:55:48 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode
14:55:49 openjdk version "17.0.4" 2022-07-19
14:55:49 OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04)
14:55:49 OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing)
14:55:49 JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64
14:55:49 [EnvInject] - Injecting environment variables from a build step.
14:55:49 [EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env'
14:55:49 [EnvInject] - Variables injected successfully.
14:55:49 [policy-xacml-pdp-master-project-csit-verify-xacml-pdp] $ /bin/sh -xe /tmp/jenkins15184397887947982739.sh
14:55:49 + /w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/csit/run-project-csit.sh xacml-pdp
14:55:49 WARNING! Using --password via the CLI is insecure. Use --password-stdin.
14:55:49 WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json.
14:55:49 Configure a credential helper to remove this warning. See
14:55:49 https://docs.docker.com/engine/reference/commandline/login/#credentials-store
14:55:49
14:55:49 Login Succeeded
14:55:49 docker: 'compose' is not a docker command.
14:55:49 See 'docker --help'
14:55:49 Docker Compose Plugin not installed. Installing now...
14:55:49   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
14:55:49                                  Dload  Upload   Total   Spent    Left  Speed
14:55:50 100 60.2M  100 60.2M    0     0  71.7M      0 --:--:-- --:--:-- --:--:-- 71.7M
14:55:50 Setting project configuration for: xacml-pdp
14:55:50 Configuring docker compose...
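Two notes on this step. The login warning is avoidable by piping the password in (echo "$PASS" | docker login --username "$USER" --password-stdin). And because `docker compose` was missing, the script downloads the Compose v2 CLI plugin; a minimal manual equivalent, assuming an x86_64 Linux host (the release URL and per-user plugin path below are the standard Compose conventions, not taken from the script itself):

$ mkdir -p ~/.docker/cli-plugins
$ curl -SL https://github.com/docker/compose/releases/latest/download/docker-compose-linux-x86_64 \
    -o ~/.docker/cli-plugins/docker-compose
$ chmod +x ~/.docker/cli-plugins/docker-compose
$ docker compose version   # 'compose' now resolves as a docker subcommand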
14:55:55 Starting xacml-pdp using postgres + Grafana/Prometheus
14:55:55 xacml-pdp Pulling
14:55:55 api Pulling
14:55:55 zookeeper Pulling
14:55:55 policy-db-migrator Pulling
14:55:55 prometheus Pulling
14:55:55 postgres Pulling
14:55:55 kafka Pulling
14:55:55 grafana Pulling
14:55:55 pap Pulling
14:55:57 [... per-layer "Pulling fs layer" / Waiting / Downloading / Verifying Checksum / Download complete / Extracting / "Pull complete" progress for the images above trimmed ...]
14:56:02 xacml-pdp Pulled
14:56:03 pap Pulled
14:56:03 api Pulled
14:56:04 policy-db-migrator Pulled
14:56:04 6ac0e4adf315 Downloading
[===============================================> ] 58.93MB/62.07MB 14:56:04 eca0188f477e Pull complete 14:56:04 6ac0e4adf315 Verifying Checksum 14:56:04 6ac0e4adf315 Download complete 14:56:04 e444bcd4d577 Extracting [==================================================>] 279B/279B 14:56:04 55f2b468da67 Extracting [=======================> ] 121.4MB/257.9MB 14:56:04 e444bcd4d577 Extracting [==================================================>] 279B/279B 14:56:04 eabd8714fec9 Downloading [==================================> ] 257.4MB/375MB 14:56:04 f836d47fdc4d Extracting [======================================> ] 83MB/107.3MB 14:56:04 531ee2cf3c0c Extracting [===============================> ] 5.112MB/8.066MB 14:56:04 55f2b468da67 Extracting [========================> ] 125.3MB/257.9MB 14:56:04 eabd8714fec9 Downloading [====================================> ] 274.7MB/375MB 14:56:04 6ac0e4adf315 Extracting [> ] 557.1kB/62.07MB 14:56:04 531ee2cf3c0c Extracting [============================================> ] 7.176MB/8.066MB 14:56:04 f836d47fdc4d Extracting [========================================> ] 86.34MB/107.3MB 14:56:04 e444bcd4d577 Pull complete 14:56:04 531ee2cf3c0c Extracting [==================================================>] 8.066MB/8.066MB 14:56:04 55f2b468da67 Extracting [=========================> ] 130.4MB/257.9MB 14:56:04 eabd8714fec9 Downloading [======================================> ] 288.2MB/375MB 14:56:04 6ac0e4adf315 Extracting [===> ] 3.899MB/62.07MB 14:56:04 531ee2cf3c0c Pull complete 14:56:04 f836d47fdc4d Extracting [==========================================> ] 90.8MB/107.3MB 14:56:04 ed54a7dee1d8 Extracting [=> ] 32.77kB/1.196MB 14:56:04 eabd8714fec9 Downloading [=========================================> ] 308.7MB/375MB 14:56:04 55f2b468da67 Extracting [==========================> ] 136.5MB/257.9MB 14:56:04 ed54a7dee1d8 Extracting [=================================================> ] 1.18MB/1.196MB 14:56:04 ed54a7dee1d8 Extracting [==================================================>] 1.196MB/1.196MB 14:56:04 6ac0e4adf315 Extracting [=====> ] 6.685MB/62.07MB 14:56:04 ed54a7dee1d8 Extracting [==================================================>] 1.196MB/1.196MB 14:56:04 f836d47fdc4d Extracting [=============================================> ] 96.93MB/107.3MB 14:56:04 ed54a7dee1d8 Pull complete 14:56:04 12c5c803443f Extracting [==================================================>] 116B/116B 14:56:04 12c5c803443f Extracting [==================================================>] 116B/116B 14:56:04 eabd8714fec9 Downloading [==========================================> ] 321.2MB/375MB 14:56:04 55f2b468da67 Extracting [===========================> ] 139.8MB/257.9MB 14:56:04 f836d47fdc4d Extracting [==============================================> ] 100.3MB/107.3MB 14:56:04 6ac0e4adf315 Extracting [========> ] 10.58MB/62.07MB 14:56:04 408012a7b118 Downloading [==================================================>] 637B/637B 14:56:04 f3b09c502777 Downloading [> ] 539.6kB/56.52MB 14:56:04 44986281b8b9 Downloading [=====================================> ] 3.011kB/4.022kB 14:56:04 44986281b8b9 Downloading [==================================================>] 4.022kB/4.022kB 14:56:04 44986281b8b9 Verifying Checksum 14:56:04 44986281b8b9 Download complete 14:56:04 bf70c5107ab5 Verifying Checksum 14:56:04 1ccde423731d Downloading [==> ] 3.01kB/61.44kB 14:56:04 1ccde423731d Downloading [==================================================>] 61.44kB/61.44kB 14:56:04 1ccde423731d Download 
complete 14:56:04 eabd8714fec9 Downloading [=============================================> ] 339MB/375MB 14:56:04 7221d93db8a9 Downloading [==================================================>] 100B/100B 14:56:04 7221d93db8a9 Verifying Checksum 14:56:04 7221d93db8a9 Download complete 14:56:04 7df673c7455d Downloading [==================================================>] 694B/694B 14:56:04 7df673c7455d Download complete 14:56:04 55f2b468da67 Extracting [===========================> ] 143.7MB/257.9MB 14:56:04 12c5c803443f Pull complete 14:56:04 e27c75a98748 Extracting [==================================================>] 3.144kB/3.144kB 14:56:04 e27c75a98748 Extracting [==================================================>] 3.144kB/3.144kB 14:56:04 6ac0e4adf315 Extracting [============> ] 15.6MB/62.07MB 14:56:04 f3b09c502777 Downloading [========> ] 9.731MB/56.52MB 14:56:05 f836d47fdc4d Extracting [================================================> ] 103.6MB/107.3MB 14:56:05 eabd8714fec9 Downloading [==============================================> ] 350.4MB/375MB 14:56:05 55f2b468da67 Extracting [============================> ] 147.6MB/257.9MB 14:56:05 f3b09c502777 Downloading [=====================> ] 23.79MB/56.52MB 14:56:05 6ac0e4adf315 Extracting [===============> ] 18.94MB/62.07MB 14:56:05 f836d47fdc4d Extracting [================================================> ] 104.7MB/107.3MB 14:56:05 eabd8714fec9 Downloading [================================================> ] 365MB/375MB 14:56:05 55f2b468da67 Extracting [=============================> ] 151MB/257.9MB 14:56:05 f3b09c502777 Downloading [===============================> ] 35.68MB/56.52MB 14:56:05 6ac0e4adf315 Extracting [===================> ] 23.95MB/62.07MB 14:56:05 eabd8714fec9 Downloading [=================================================> ] 373.1MB/375MB 14:56:05 f836d47fdc4d Extracting [=================================================> ] 107MB/107.3MB 14:56:05 f836d47fdc4d Extracting [==================================================>] 107.3MB/107.3MB 14:56:05 eabd8714fec9 Verifying Checksum 14:56:05 eabd8714fec9 Download complete 14:56:05 55f2b468da67 Extracting [=============================> ] 153.7MB/257.9MB 14:56:05 e27c75a98748 Pull complete 14:56:05 f836d47fdc4d Pull complete 14:56:05 f3b09c502777 Downloading [===========================================> ] 48.66MB/56.52MB 14:56:05 6ac0e4adf315 Extracting [======================> ] 28.41MB/62.07MB 14:56:05 f3b09c502777 Verifying Checksum 14:56:05 f3b09c502777 Download complete 14:56:05 55f2b468da67 Extracting [==============================> ] 157.6MB/257.9MB 14:56:05 eabd8714fec9 Extracting [> ] 557.1kB/375MB 14:56:05 e73cb4a42719 Extracting [> ] 557.1kB/109.1MB 14:56:05 6ac0e4adf315 Extracting [==========================> ] 32.87MB/62.07MB 14:56:05 eabd8714fec9 Extracting [=> ] 10.03MB/375MB 14:56:05 55f2b468da67 Extracting [===============================> ] 161.5MB/257.9MB 14:56:05 8b5292c940e1 Extracting [> ] 557.1kB/63.48MB 14:56:05 e73cb4a42719 Extracting [=> ] 3.342MB/109.1MB 14:56:05 6ac0e4adf315 Extracting [===================================> ] 43.45MB/62.07MB 14:56:05 eabd8714fec9 Extracting [==> ] 17.27MB/375MB 14:56:05 55f2b468da67 Extracting [===============================> ] 164.9MB/257.9MB 14:56:05 e73cb4a42719 Extracting [===> ] 6.685MB/109.1MB 14:56:05 6ac0e4adf315 Extracting [===========================================> ] 54.59MB/62.07MB 14:56:05 eabd8714fec9 Extracting [==> ] 21.73MB/375MB 14:56:05 55f2b468da67 Extracting 
[================================> ] 169.3MB/257.9MB 14:56:05 8b5292c940e1 Extracting [=> ] 1.671MB/63.48MB 14:56:05 e73cb4a42719 Extracting [====> ] 9.47MB/109.1MB 14:56:05 6ac0e4adf315 Extracting [=================================================> ] 61.28MB/62.07MB 14:56:05 6ac0e4adf315 Extracting [==================================================>] 62.07MB/62.07MB 14:56:05 55f2b468da67 Extracting [=================================> ] 170.5MB/257.9MB 14:56:05 e73cb4a42719 Extracting [=====> ] 11.14MB/109.1MB 14:56:06 e73cb4a42719 Extracting [=====> ] 11.7MB/109.1MB 14:56:06 8b5292c940e1 Extracting [=> ] 2.228MB/63.48MB 14:56:06 e73cb4a42719 Extracting [=====> ] 12.81MB/109.1MB 14:56:06 55f2b468da67 Extracting [=================================> ] 171MB/257.9MB 14:56:06 eabd8714fec9 Extracting [===> ] 23.95MB/375MB 14:56:06 e73cb4a42719 Extracting [=======> ] 15.6MB/109.1MB 14:56:06 55f2b468da67 Extracting [=================================> ] 172.1MB/257.9MB 14:56:06 eabd8714fec9 Extracting [===> ] 27.85MB/375MB 14:56:06 6ac0e4adf315 Pull complete 14:56:06 eabd8714fec9 Extracting [======> ] 45.68MB/375MB 14:56:06 e73cb4a42719 Extracting [=======> ] 16.71MB/109.1MB 14:56:06 8b5292c940e1 Extracting [==> ] 2.785MB/63.48MB 14:56:06 55f2b468da67 Extracting [=================================> ] 172.7MB/257.9MB 14:56:06 eabd8714fec9 Extracting [=======> ] 55.71MB/375MB 14:56:06 e73cb4a42719 Extracting [=========> ] 20.05MB/109.1MB 14:56:06 eabd8714fec9 Extracting [========> ] 61.28MB/375MB 14:56:06 55f2b468da67 Extracting [=================================> ] 173.8MB/257.9MB 14:56:06 f3b09c502777 Extracting [> ] 557.1kB/56.52MB 14:56:06 eabd8714fec9 Extracting [=========> ] 67.96MB/375MB 14:56:06 8b5292c940e1 Extracting [===> ] 4.456MB/63.48MB 14:56:06 55f2b468da67 Extracting [=================================> ] 174.4MB/257.9MB 14:56:06 e73cb4a42719 Extracting [==========> ] 22.84MB/109.1MB 14:56:06 f3b09c502777 Extracting [=====> ] 6.128MB/56.52MB 14:56:06 eabd8714fec9 Extracting [=========> ] 69.63MB/375MB 14:56:06 55f2b468da67 Extracting [==================================> ] 176.6MB/257.9MB 14:56:06 e73cb4a42719 Extracting [===========> ] 25.62MB/109.1MB 14:56:06 f3b09c502777 Extracting [=======> ] 8.913MB/56.52MB 14:56:07 eabd8714fec9 Extracting [==========> ] 77.99MB/375MB 14:56:07 8b5292c940e1 Extracting [===> ] 5.014MB/63.48MB 14:56:07 e73cb4a42719 Extracting [============> ] 27.85MB/109.1MB 14:56:07 55f2b468da67 Extracting [==================================> ] 179.4MB/257.9MB 14:56:07 eabd8714fec9 Extracting [===========> ] 86.9MB/375MB 14:56:07 f3b09c502777 Extracting [==========> ] 11.7MB/56.52MB 14:56:07 8b5292c940e1 Extracting [======> ] 7.799MB/63.48MB 14:56:07 e73cb4a42719 Extracting [==============> ] 31.2MB/109.1MB 14:56:07 55f2b468da67 Extracting [===================================> ] 182.7MB/257.9MB 14:56:07 eabd8714fec9 Extracting [============> ] 95.26MB/375MB 14:56:07 f3b09c502777 Extracting [============> ] 13.93MB/56.52MB 14:56:07 8b5292c940e1 Extracting [=======> ] 9.47MB/63.48MB 14:56:07 eabd8714fec9 Extracting [=============> ] 102.5MB/375MB 14:56:07 55f2b468da67 Extracting [====================================> ] 186.6MB/257.9MB 14:56:07 e73cb4a42719 Extracting [===============> ] 34.54MB/109.1MB 14:56:07 f3b09c502777 Extracting [==============> ] 16.71MB/56.52MB 14:56:07 8b5292c940e1 Extracting [========> ] 11.14MB/63.48MB 14:56:07 55f2b468da67 Extracting [====================================> ] 190.5MB/257.9MB 14:56:07 eabd8714fec9 Extracting [==============> ] 
107.5MB/375MB 14:56:07 e73cb4a42719 Extracting [=================> ] 37.88MB/109.1MB 14:56:07 f3b09c502777 Extracting [=================> ] 19.5MB/56.52MB 14:56:07 8b5292c940e1 Extracting [==========> ] 13.37MB/63.48MB 14:56:07 eabd8714fec9 Extracting [==============> ] 110.9MB/375MB 14:56:07 e73cb4a42719 Extracting [==================> ] 41.22MB/109.1MB 14:56:07 55f2b468da67 Extracting [=====================================> ] 193.9MB/257.9MB 14:56:07 f3b09c502777 Extracting [===================> ] 22.28MB/56.52MB 14:56:07 8b5292c940e1 Extracting [============> ] 16.15MB/63.48MB 14:56:07 eabd8714fec9 Extracting [===============> ] 114.8MB/375MB 14:56:07 55f2b468da67 Extracting [=====================================> ] 195.5MB/257.9MB 14:56:07 e73cb4a42719 Extracting [====================> ] 45.68MB/109.1MB 14:56:07 f3b09c502777 Extracting [=======================> ] 26.18MB/56.52MB 14:56:07 eabd8714fec9 Extracting [===============> ] 117.5MB/375MB 14:56:07 8b5292c940e1 Extracting [=============> ] 17.27MB/63.48MB 14:56:07 e73cb4a42719 Extracting [=======================> ] 50.69MB/109.1MB 14:56:07 f3b09c502777 Extracting [===========================> ] 31.2MB/56.52MB 14:56:07 55f2b468da67 Extracting [======================================> ] 196.6MB/257.9MB 14:56:07 eabd8714fec9 Extracting [================> ] 120.9MB/375MB 14:56:07 8b5292c940e1 Extracting [===============> ] 19.5MB/63.48MB 14:56:07 f3b09c502777 Extracting [===================================> ] 40.67MB/56.52MB 14:56:07 e73cb4a42719 Extracting [========================> ] 52.36MB/109.1MB 14:56:07 55f2b468da67 Extracting [======================================> ] 199.4MB/257.9MB 14:56:07 eabd8714fec9 Extracting [================> ] 124.8MB/375MB 14:56:07 8b5292c940e1 Extracting [=================> ] 22.28MB/63.48MB 14:56:07 f3b09c502777 Extracting [============================================> ] 50.14MB/56.52MB 14:56:08 e73cb4a42719 Extracting [=========================> ] 55.15MB/109.1MB 14:56:08 55f2b468da67 Extracting [======================================> ] 201.1MB/257.9MB 14:56:08 eabd8714fec9 Extracting [=================> ] 128.7MB/375MB 14:56:08 8b5292c940e1 Extracting [==================> ] 23.95MB/63.48MB 14:56:08 f3b09c502777 Extracting [=================================================> ] 55.71MB/56.52MB 14:56:08 e73cb4a42719 Extracting [==========================> ] 57.38MB/109.1MB 14:56:08 55f2b468da67 Extracting [=======================================> ] 203.3MB/257.9MB 14:56:08 f3b09c502777 Extracting [==================================================>] 56.52MB/56.52MB 14:56:08 eabd8714fec9 Extracting [=================> ] 132.6MB/375MB 14:56:08 8b5292c940e1 Extracting [=====================> ] 26.74MB/63.48MB 14:56:08 e73cb4a42719 Extracting [===========================> ] 59.6MB/109.1MB 14:56:08 eabd8714fec9 Extracting [==================> ] 136.5MB/375MB 14:56:08 55f2b468da67 Extracting [=======================================> ] 206.1MB/257.9MB 14:56:08 8b5292c940e1 Extracting [=======================> ] 30.08MB/63.48MB 14:56:08 e73cb4a42719 Extracting [=============================> ] 65.18MB/109.1MB 14:56:08 eabd8714fec9 Extracting [==================> ] 138.7MB/375MB 14:56:08 8b5292c940e1 Extracting [========================> ] 31.2MB/63.48MB 14:56:08 55f2b468da67 Extracting [========================================> ] 207.2MB/257.9MB 14:56:08 e73cb4a42719 Extracting [==============================> ] 66.85MB/109.1MB 14:56:08 eabd8714fec9 Extracting [===================> ] 142.6MB/375MB 14:56:08 
55f2b468da67 Extracting [========================================> ] 209.5MB/257.9MB 14:56:08 8b5292c940e1 Extracting [==========================> ] 33.42MB/63.48MB 14:56:08 e73cb4a42719 Extracting [================================> ] 71.3MB/109.1MB 14:56:08 eabd8714fec9 Extracting [===================> ] 145.9MB/375MB 14:56:08 55f2b468da67 Extracting [=========================================> ] 211.7MB/257.9MB 14:56:08 8b5292c940e1 Extracting [============================> ] 36.21MB/63.48MB 14:56:08 e73cb4a42719 Extracting [==================================> ] 74.65MB/109.1MB 14:56:08 eabd8714fec9 Extracting [===================> ] 149.3MB/375MB 14:56:08 e73cb4a42719 Extracting [===================================> ] 77.99MB/109.1MB 14:56:08 55f2b468da67 Extracting [=========================================> ] 213.9MB/257.9MB 14:56:08 8b5292c940e1 Extracting [==============================> ] 38.44MB/63.48MB 14:56:08 eabd8714fec9 Extracting [====================> ] 152.6MB/375MB 14:56:08 55f2b468da67 Extracting [==========================================> ] 217.3MB/257.9MB 14:56:08 8b5292c940e1 Extracting [================================> ] 41.22MB/63.48MB 14:56:08 e73cb4a42719 Extracting [=====================================> ] 81.89MB/109.1MB 14:56:09 eabd8714fec9 Extracting [====================> ] 156MB/375MB 14:56:09 55f2b468da67 Extracting [==========================================> ] 221.7MB/257.9MB 14:56:09 8b5292c940e1 Extracting [===================================> ] 44.56MB/63.48MB 14:56:09 e73cb4a42719 Extracting [=======================================> ] 86.34MB/109.1MB 14:56:09 f3b09c502777 Pull complete 14:56:09 55f2b468da67 Extracting [===========================================> ] 223.4MB/257.9MB 14:56:09 8b5292c940e1 Extracting [====================================> ] 46.79MB/63.48MB 14:56:09 e73cb4a42719 Extracting [========================================> ] 89.13MB/109.1MB 14:56:09 eabd8714fec9 Extracting [=====================> ] 158.2MB/375MB 14:56:09 55f2b468da67 Extracting [===========================================> ] 223.9MB/257.9MB 14:56:09 eabd8714fec9 Extracting [=====================> ] 162.1MB/375MB 14:56:09 e73cb4a42719 Extracting [==========================================> ] 91.91MB/109.1MB 14:56:09 55f2b468da67 Extracting [============================================> ] 227.3MB/257.9MB 14:56:09 8b5292c940e1 Extracting [======================================> ] 49.02MB/63.48MB 14:56:09 eabd8714fec9 Extracting [======================> ] 165.4MB/375MB 14:56:09 e73cb4a42719 Extracting [===========================================> ] 94.7MB/109.1MB 14:56:09 55f2b468da67 Extracting [============================================> ] 229MB/257.9MB 14:56:09 8b5292c940e1 Extracting [=======================================> ] 50.69MB/63.48MB 14:56:09 408012a7b118 Extracting [==================================================>] 637B/637B 14:56:09 408012a7b118 Extracting [==================================================>] 637B/637B 14:56:09 eabd8714fec9 Extracting [======================> ] 167.1MB/375MB 14:56:09 e73cb4a42719 Extracting [============================================> ] 96.37MB/109.1MB 14:56:09 8b5292c940e1 Extracting [=========================================> ] 52.36MB/63.48MB 14:56:09 55f2b468da67 Extracting [============================================> ] 231.2MB/257.9MB 14:56:09 eabd8714fec9 Extracting [=======================> ] 176MB/375MB 14:56:09 e73cb4a42719 Extracting [=============================================> ] 99.16MB/109.1MB 
14:56:09 8b5292c940e1 Extracting [===========================================> ] 55.15MB/63.48MB 14:56:09 eabd8714fec9 Extracting [=========================> ] 188.3MB/375MB 14:56:09 eabd8714fec9 Extracting [===========================> ] 204.4MB/375MB 14:56:10 55f2b468da67 Extracting [=============================================> ] 232.8MB/257.9MB 14:56:10 e73cb4a42719 Extracting [==============================================> ] 101.9MB/109.1MB 14:56:10 eabd8714fec9 Extracting [============================> ] 216.1MB/375MB 14:56:10 55f2b468da67 Extracting [=============================================> ] 235.6MB/257.9MB 14:56:10 8b5292c940e1 Extracting [==============================================> ] 59.05MB/63.48MB 14:56:10 eabd8714fec9 Extracting [=============================> ] 218.4MB/375MB 14:56:10 e73cb4a42719 Extracting [===============================================> ] 103.6MB/109.1MB 14:56:10 408012a7b118 Pull complete 14:56:10 55f2b468da67 Extracting [=============================================> ] 236.7MB/257.9MB 14:56:10 eabd8714fec9 Extracting [=============================> ] 220.6MB/375MB 14:56:10 8b5292c940e1 Extracting [==============================================> ] 59.6MB/63.48MB 14:56:10 e73cb4a42719 Extracting [================================================> ] 105.3MB/109.1MB 14:56:10 44986281b8b9 Extracting [==================================================>] 4.022kB/4.022kB 14:56:10 44986281b8b9 Extracting [==================================================>] 4.022kB/4.022kB 14:56:10 eabd8714fec9 Extracting [=============================> ] 222.3MB/375MB 14:56:10 55f2b468da67 Extracting [==============================================> ] 241.2MB/257.9MB 14:56:10 8b5292c940e1 Extracting [=================================================> ] 62.39MB/63.48MB 14:56:10 e73cb4a42719 Extracting [================================================> ] 106.4MB/109.1MB 14:56:10 8b5292c940e1 Extracting [==================================================>] 63.48MB/63.48MB 14:56:10 8b5292c940e1 Extracting [==================================================>] 63.48MB/63.48MB 14:56:10 eabd8714fec9 Extracting [==============================> ] 225.1MB/375MB 14:56:10 eabd8714fec9 Extracting [==============================> ] 228.4MB/375MB 14:56:10 e73cb4a42719 Extracting [=================================================> ] 107.5MB/109.1MB 14:56:10 55f2b468da67 Extracting [===============================================> ] 244.5MB/257.9MB 14:56:10 eabd8714fec9 Extracting [===============================> ] 235.1MB/375MB 14:56:10 55f2b468da67 Extracting [===============================================> ] 245.1MB/257.9MB 14:56:10 e73cb4a42719 Extracting [=================================================> ] 108.1MB/109.1MB 14:56:10 44986281b8b9 Pull complete 14:56:10 8b5292c940e1 Pull complete 14:56:10 454a4350d439 Extracting [==================================================>] 11.93kB/11.93kB 14:56:10 454a4350d439 Extracting [==================================================>] 11.93kB/11.93kB 14:56:10 bf70c5107ab5 Extracting [==================================================>] 1.44kB/1.44kB 14:56:10 bf70c5107ab5 Extracting [==================================================>] 1.44kB/1.44kB 14:56:10 55f2b468da67 Extracting [================================================> ] 252.3MB/257.9MB 14:56:10 eabd8714fec9 Extracting [===============================> ] 236.7MB/375MB 14:56:10 e73cb4a42719 Extracting [==================================================>] 109.1MB/109.1MB 
14:56:11 55f2b468da67 Extracting [=================================================> ] 253.5MB/257.9MB 14:56:11 eabd8714fec9 Extracting [===============================> ] 238.4MB/375MB 14:56:11 55f2b468da67 Extracting [=================================================> ] 257.4MB/257.9MB 14:56:11 eabd8714fec9 Extracting [================================> ] 242.3MB/375MB 14:56:11 bf70c5107ab5 Pull complete 14:56:11 55f2b468da67 Extracting [==================================================>] 257.9MB/257.9MB 14:56:11 55f2b468da67 Extracting [==================================================>] 257.9MB/257.9MB 14:56:11 454a4350d439 Pull complete 14:56:11 e73cb4a42719 Pull complete 14:56:11 1ccde423731d Extracting [==========================> ] 32.77kB/61.44kB 14:56:11 1ccde423731d Extracting [==================================================>] 61.44kB/61.44kB 14:56:11 1ccde423731d Extracting [==================================================>] 61.44kB/61.44kB 14:56:11 a83b68436f09 Extracting [==================================================>] 9.919kB/9.919kB 14:56:11 a83b68436f09 Extracting [==================================================>] 9.919kB/9.919kB 14:56:11 eabd8714fec9 Extracting [================================> ] 245.1MB/375MB 14:56:11 9a8c18aee5ea Extracting [==================================================>] 1.227kB/1.227kB 14:56:11 9a8c18aee5ea Extracting [==================================================>] 1.227kB/1.227kB 14:56:11 eabd8714fec9 Extracting [================================> ] 246.2MB/375MB 14:56:11 eabd8714fec9 Extracting [=================================> ] 251.2MB/375MB 14:56:11 eabd8714fec9 Extracting [==================================> ] 256.2MB/375MB 14:56:11 eabd8714fec9 Extracting [===================================> ] 262.9MB/375MB 14:56:11 eabd8714fec9 Extracting [===================================> ] 267.9MB/375MB 14:56:12 eabd8714fec9 Extracting [===================================> ] 269.6MB/375MB 14:56:12 eabd8714fec9 Extracting [====================================> ] 271.3MB/375MB 14:56:12 eabd8714fec9 Extracting [====================================> ] 272.4MB/375MB 14:56:12 eabd8714fec9 Extracting [====================================> ] 274.1MB/375MB 14:56:12 eabd8714fec9 Extracting [====================================> ] 276.9MB/375MB 14:56:12 eabd8714fec9 Extracting [=====================================> ] 281.9MB/375MB 14:56:12 eabd8714fec9 Extracting [======================================> ] 290.2MB/375MB 14:56:12 55f2b468da67 Pull complete 14:56:12 1ccde423731d Pull complete 14:56:12 a83b68436f09 Pull complete 14:56:12 eabd8714fec9 Extracting [=======================================> ] 294.1MB/375MB 14:56:13 eabd8714fec9 Extracting [=======================================> ] 296.4MB/375MB 14:56:13 9a8c18aee5ea Pull complete 14:56:13 eabd8714fec9 Extracting [=======================================> ] 299.1MB/375MB 14:56:13 eabd8714fec9 Extracting [=======================================> ] 299.7MB/375MB 14:56:13 eabd8714fec9 Extracting [========================================> ] 303MB/375MB 14:56:13 eabd8714fec9 Extracting [========================================> ] 305.8MB/375MB 14:56:13 82bfc142787e Extracting [> ] 98.3kB/8.613MB 14:56:14 82bfc142787e Extracting [==> ] 491.5kB/8.613MB 14:56:14 eabd8714fec9 Extracting [========================================> ] 306.9MB/375MB 14:56:14 82bfc142787e Extracting [==================================================>] 8.613MB/8.613MB 14:56:14 eabd8714fec9 Extracting 
[=========================================> ] 310.3MB/375MB 14:56:14 eabd8714fec9 Extracting [=========================================> ] 312.5MB/375MB 14:56:14 eabd8714fec9 Extracting [=========================================> ] 314.7MB/375MB 14:56:14 eabd8714fec9 Extracting [==========================================> ] 318.6MB/375MB 14:56:14 eabd8714fec9 Extracting [==========================================> ] 322MB/375MB 14:56:14 7221d93db8a9 Extracting [==================================================>] 100B/100B 14:56:14 7221d93db8a9 Extracting [==================================================>] 100B/100B 14:56:14 eabd8714fec9 Extracting [===========================================> ] 324.2MB/375MB 14:56:14 eabd8714fec9 Extracting [===========================================> ] 327.5MB/375MB 14:56:15 eabd8714fec9 Extracting [===========================================> ] 329.8MB/375MB 14:56:15 eabd8714fec9 Extracting [============================================> ] 332MB/375MB 14:56:15 eabd8714fec9 Extracting [============================================> ] 334.2MB/375MB 14:56:15 eabd8714fec9 Extracting [=============================================> ] 339.8MB/375MB 14:56:15 eabd8714fec9 Extracting [=============================================> ] 341.5MB/375MB 14:56:15 eabd8714fec9 Extracting [=============================================> ] 342MB/375MB 14:56:15 eabd8714fec9 Extracting [=============================================> ] 342.6MB/375MB 14:56:15 eabd8714fec9 Extracting [==============================================> ] 345.4MB/375MB 14:56:16 eabd8714fec9 Extracting [==============================================> ] 348.2MB/375MB 14:56:16 eabd8714fec9 Extracting [===============================================> ] 353.7MB/375MB 14:56:16 82bfc142787e Pull complete 14:56:16 787d6bee9571 Extracting [==================================================>] 127B/127B 14:56:16 787d6bee9571 Extracting [==================================================>] 127B/127B 14:56:16 46baca71a4ef Extracting [==================================================>] 18.11kB/18.11kB 14:56:16 46baca71a4ef Extracting [==================================================>] 18.11kB/18.11kB 14:56:16 7221d93db8a9 Pull complete 14:56:16 7df673c7455d Extracting [==================================================>] 694B/694B 14:56:16 7df673c7455d Extracting [==================================================>] 694B/694B 14:56:16 grafana Pulled 14:56:16 787d6bee9571 Pull complete 14:56:16 7df673c7455d Pull complete 14:56:16 46baca71a4ef Pull complete 14:56:16 13ff0988aaea Extracting [==================================================>] 167B/167B 14:56:16 prometheus Pulled 14:56:16 eabd8714fec9 Extracting [===============================================> ] 357.1MB/375MB 14:56:16 b0e0ef7895f4 Extracting [> ] 393.2kB/37.01MB 14:56:16 13ff0988aaea Pull complete 14:56:16 4b82842ab819 Extracting [==================================================>] 5.415kB/5.415kB 14:56:16 4b82842ab819 Extracting [==================================================>] 5.415kB/5.415kB 14:56:16 eabd8714fec9 Extracting [================================================> ] 363.2MB/375MB 14:56:16 b0e0ef7895f4 Extracting [================> ] 12.19MB/37.01MB 14:56:16 eabd8714fec9 Extracting [=================================================> ] 369.3MB/375MB 14:56:16 4b82842ab819 Pull complete 14:56:16 7e568a0dc8fb Extracting [==================================================>] 184B/184B 14:56:16 7e568a0dc8fb Extracting 
[==================================================>] 184B/184B 14:56:16 b0e0ef7895f4 Extracting [======================================> ] 28.31MB/37.01MB 14:56:16 eabd8714fec9 Extracting [=================================================> ] 373.8MB/375MB 14:56:16 b0e0ef7895f4 Extracting [==================================================>] 37.01MB/37.01MB 14:56:16 eabd8714fec9 Extracting [==================================================>] 375MB/375MB 14:56:16 b0e0ef7895f4 Pull complete 14:56:16 7e568a0dc8fb Pull complete 14:56:16 c0c90eeb8aca Extracting [==================================================>] 1.105kB/1.105kB 14:56:16 c0c90eeb8aca Extracting [==================================================>] 1.105kB/1.105kB 14:56:16 postgres Pulled 14:56:16 c0c90eeb8aca Pull complete 14:56:16 5cfb27c10ea5 Extracting [==================================================>] 852B/852B 14:56:16 5cfb27c10ea5 Extracting [==================================================>] 852B/852B 14:56:16 eabd8714fec9 Pull complete 14:56:16 45fd2fec8a19 Extracting [==================================================>] 1.103kB/1.103kB 14:56:16 45fd2fec8a19 Extracting [==================================================>] 1.103kB/1.103kB 14:56:16 5cfb27c10ea5 Pull complete 14:56:16 40a5eed61bb0 Extracting [==================================================>] 98B/98B 14:56:16 40a5eed61bb0 Extracting [==================================================>] 98B/98B 14:56:17 45fd2fec8a19 Pull complete 14:56:17 8f10199ed94b Extracting [> ] 98.3kB/8.768MB 14:56:17 40a5eed61bb0 Pull complete 14:56:17 e040ea11fa10 Extracting [==================================================>] 173B/173B 14:56:17 e040ea11fa10 Extracting [==================================================>] 173B/173B 14:56:17 8f10199ed94b Extracting [========================> ] 4.325MB/8.768MB 14:56:17 e040ea11fa10 Pull complete 14:56:17 8f10199ed94b Extracting [==================================================>] 8.768MB/8.768MB 14:56:17 8f10199ed94b Pull complete 14:56:17 f963a77d2726 Extracting [==================================================>] 21.44kB/21.44kB 14:56:17 f963a77d2726 Extracting [==================================================>] 21.44kB/21.44kB 14:56:17 09d5a3f70313 Extracting [> ] 557.1kB/109.2MB 14:56:17 f963a77d2726 Pull complete 14:56:17 09d5a3f70313 Extracting [======> ] 13.37MB/109.2MB 14:56:17 f3a82e9f1761 Extracting [> ] 458.8kB/44.41MB 14:56:17 09d5a3f70313 Extracting [=============> ] 28.41MB/109.2MB 14:56:17 f3a82e9f1761 Extracting [=============> ] 11.93MB/44.41MB 14:56:17 09d5a3f70313 Extracting [===================> ] 42.34MB/109.2MB 14:56:17 f3a82e9f1761 Extracting [===============================> ] 27.98MB/44.41MB 14:56:17 09d5a3f70313 Extracting [============================> ] 61.28MB/109.2MB 14:56:17 f3a82e9f1761 Extracting [============================================> ] 39.91MB/44.41MB 14:56:17 09d5a3f70313 Extracting [===================================> ] 78.54MB/109.2MB 14:56:17 f3a82e9f1761 Extracting [==================================================>] 44.41MB/44.41MB 14:56:17 f3a82e9f1761 Pull complete 14:56:17 79161a3f5362 Extracting [==================================================>] 4.656kB/4.656kB 14:56:17 79161a3f5362 Extracting [==================================================>] 4.656kB/4.656kB 14:56:17 09d5a3f70313 Extracting [===========================================> ] 94.7MB/109.2MB 14:56:17 09d5a3f70313 Extracting [================================================> ] 105.8MB/109.2MB 14:56:17 
14:56:19 kafka Pulled
14:56:21 zookeeper Pulled
14:56:21 Network compose_default Creating
14:56:21 Network compose_default Created
14:56:21 Container postgres Creating
14:56:21 Container zookeeper Creating
14:56:21 Container prometheus Creating
14:56:32 Container postgres Created
14:56:32 Container prometheus Created
14:56:32 Container policy-db-migrator Creating
14:56:32 Container grafana Creating
14:56:32 Container zookeeper Created
14:56:32 Container kafka Creating
14:56:32 Container grafana Created
14:56:32 Container kafka Created
14:56:32 Container policy-db-migrator Created
14:56:32 Container policy-api Creating
14:56:32 Container policy-api Created
14:56:32 Container policy-pap Creating
14:56:32 Container policy-pap Created
14:56:32 Container policy-xacml-pdp Creating
14:56:32 Container policy-xacml-pdp Created
14:56:32 Container prometheus Starting
14:56:32 Container postgres Starting
14:56:32 Container zookeeper Starting
14:56:34 Container zookeeper Started
14:56:34 Container kafka Starting
14:56:34 Container postgres Started
14:56:34 Container policy-db-migrator Starting
14:56:35 Container policy-db-migrator Started
14:56:35 Container policy-api Starting
14:56:36 Container prometheus Started
14:56:36 Container grafana Starting
14:56:38 Container grafana Started
14:56:39 Container kafka Started
14:56:40 Container policy-api Started
14:56:40 Container policy-pap Starting
14:56:41 Container policy-pap Started
14:56:41 Container policy-xacml-pdp Starting
14:56:42 Container policy-xacml-pdp Started
14:56:42 Prometheus server: http://localhost:30259
14:56:42 Grafana server: http://localhost:30269
14:56:42 Waiting 1 minute for xacml-pdp to start...
14:57:42 Checking if REST port 30004 is open on localhost ...
14:57:42 IMAGE                                                         NAMES              STATUS
14:57:42 nexus3.onap.org:10001/onap/policy-xacml-pdp:4.2.1-SNAPSHOT    policy-xacml-pdp   Up About a minute
14:57:42 nexus3.onap.org:10001/onap/policy-pap:4.2.1-SNAPSHOT          policy-pap         Up About a minute
14:57:42 nexus3.onap.org:10001/onap/policy-api:4.2.1-SNAPSHOT          policy-api         Up About a minute
14:57:42 nexus3.onap.org:10001/confluentinc/cp-kafka:7.4.9             kafka              Up About a minute
14:57:42 nexus3.onap.org:10001/grafana/grafana:latest                  grafana            Up About a minute
14:57:42 nexus3.onap.org:10001/confluentinc/cp-zookeeper:latest        zookeeper          Up About a minute
14:57:42 nexus3.onap.org:10001/prom/prometheus:latest                  prometheus         Up About a minute
14:57:42 nexus3.onap.org:10001/library/postgres:16.4                   postgres           Up About a minute
14:57:42 Cloning into '/w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/csit/resources/tests/models'...
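The "Checking if REST port 30004 is open on localhost" step above amounts to polling a TCP port until the xacml-pdp REST endpoint accepts connections. A minimal sketch of such a probe, using only the host and port printed in this log; this is illustrative, not the job's actual script:

import socket
import time

# Minimal sketch (not the job's actual script) of the port probe logged
# above: poll localhost:30004 until the xacml-pdp REST port accepts a TCP
# connection or a deadline passes.
def wait_for_port(host: str, port: int, timeout_s: float = 60.0) -> bool:
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=2.0):
                return True          # port is open
        except OSError:
            time.sleep(1.0)          # not up yet; retry
    return False

if __name__ == "__main__":
    print("open" if wait_for_port("localhost", 30004) else "timed out")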
14:57:43 Building robot framework docker image
14:58:18 sha256:d2d77c24342c15d7072fec4116c160b141cdde3a3e64bb728d85b56ecee46b14
14:58:22 top - 14:58:22 up 4 min, 0 users, load average: 2.32, 1.39, 0.57
14:58:22 Tasks: 228 total, 1 running, 150 sleeping, 0 stopped, 0 zombie
14:58:22 %Cpu(s): 14.9 us, 3.3 sy, 0.0 ni, 78.3 id, 3.3 wa, 0.0 hi, 0.1 si, 0.1 st
14:58:22
14:58:22         total   used   free   shared   buff/cache   available
14:58:22 Mem:      31G   2.6G    21G      27M         7.1G         28G
14:58:22 Swap:    1.0G     0B   1.0G
14:58:22
14:58:22 IMAGE                                                         NAMES              STATUS
14:58:22 nexus3.onap.org:10001/onap/policy-xacml-pdp:4.2.1-SNAPSHOT    policy-xacml-pdp   Up About a minute
14:58:22 nexus3.onap.org:10001/onap/policy-pap:4.2.1-SNAPSHOT          policy-pap         Up About a minute
14:58:22 nexus3.onap.org:10001/onap/policy-api:4.2.1-SNAPSHOT          policy-api         Up About a minute
14:58:22 nexus3.onap.org:10001/confluentinc/cp-kafka:7.4.9             kafka              Up About a minute
14:58:22 nexus3.onap.org:10001/grafana/grafana:latest                  grafana            Up About a minute
14:58:22 nexus3.onap.org:10001/confluentinc/cp-zookeeper:latest        zookeeper          Up About a minute
14:58:22 nexus3.onap.org:10001/prom/prometheus:latest                  prometheus         Up About a minute
14:58:22 nexus3.onap.org:10001/library/postgres:16.4                   postgres           Up About a minute
14:58:22
14:58:24 CONTAINER ID   NAME               CPU %   MEM USAGE / LIMIT     MEM %   NET I/O           BLOCK I/O       PIDS
14:58:24 f0a9bae87a4b   policy-xacml-pdp   0.72%   171.7MiB / 31.41GiB   0.53%   43.7kB / 53.8kB   0B / 4.1kB      51
14:58:24 a15acaf3a26a   policy-pap         1.25%   480.5MiB / 31.41GiB   1.49%   2.13MB / 1.06MB   0B / 139MB      68
14:58:24 dd9613f02557   policy-api         0.11%   553.3MiB / 31.41GiB   1.72%   1.14MB / 985kB    0B / 0B         57
14:58:24 455095920e36   kafka              4.13%   389.6MiB / 31.41GiB   1.21%   181kB / 170kB     0B / 639kB      83
14:58:24 5fd2387c509f   grafana            0.12%   107MiB / 31.41GiB     0.33%   19.5MB / 174kB    0B / 31.1MB     19
14:58:24 93a19590a75c   zookeeper          0.08%   92.48MiB / 31.41GiB   0.29%   54kB / 46.9kB     4.1kB / 557kB   63
14:58:24 9981a78d1373   prometheus         0.00%   20.6MiB / 31.41GiB    0.06%   62.4kB / 3.18kB   225kB / 0B      13
14:58:24 285c16058ed3   postgres           0.02%   86.02MiB / 31.41GiB   0.27%   2.56MB / 3.75MB   0B / 159MB      26
14:58:24
14:58:24 Container policy-csit Creating
14:58:24 Container policy-csit Created
14:58:24 Attaching to policy-csit
14:58:25 policy-csit | Invoking the robot tests from: xacml-pdp-test.robot xacml-pdp-slas.robot
14:58:25 policy-csit | Run Robot test
14:58:25 policy-csit | ROBOT_VARIABLES=-v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies
14:58:25 policy-csit | -v NODETEMPLATES:/opt/robotworkspace/models/models-examples/src/main/resources/nodetemplates
14:58:25 policy-csit | -v POLICY_API_IP:policy-api:6969
14:58:25 policy-csit | -v POLICY_RUNTIME_ACM_IP:policy-clamp-runtime-acm:6969
14:58:25 policy-csit | -v POLICY_PARTICIPANT_SIM_IP:policy-clamp-ac-sim-ppnt:6969
14:58:25 policy-csit | -v POLICY_PAP_IP:policy-pap:6969
14:58:25 policy-csit | -v APEX_IP:policy-apex-pdp:6969
14:58:25 policy-csit | -v APEX_EVENTS_IP:policy-apex-pdp:23324
14:58:25 policy-csit | -v KAFKA_IP:kafka:9092
14:58:25 policy-csit | -v PROMETHEUS_IP:prometheus:9090
14:58:25 policy-csit | -v POLICY_PDPX_IP:policy-xacml-pdp:6969
14:58:25 policy-csit | -v POLICY_OPA_IP:policy-opa-pdp:8282
14:58:25 policy-csit | -v POLICY_DROOLS_IP:policy-drools-pdp:9696
14:58:25 policy-csit | -v DROOLS_IP:policy-drools-apps:6969
14:58:25 policy-csit | -v DROOLS_IP_2:policy-drools-apps:9696
14:58:25 policy-csit | -v TEMP_FOLDER:/tmp/distribution
14:58:25 policy-csit | -v DISTRIBUTION_IP:policy-distribution:6969
14:58:25 policy-csit | -v TEST_ENV:docker
14:58:25 policy-csit | -v JAEGER_IP:jaeger:16686
14:58:25 policy-csit | Starting Robot test suites ...
14:58:25 policy-csit | ==============================================================================
14:58:25 policy-csit | Xacml-Pdp-Test & Xacml-Pdp-Slas
14:58:25 policy-csit | ==============================================================================
14:58:25 policy-csit | Xacml-Pdp-Test & Xacml-Pdp-Slas.Xacml-Pdp-Test
14:58:25 policy-csit | ==============================================================================
14:58:26 policy-csit | Healthcheck :: Verify policy xacml-pdp health check | PASS |
14:58:26 policy-csit | ------------------------------------------------------------------------------
14:58:26 policy-csit | Metrics :: Verify policy-xacml-pdp is exporting prometheus metrics | PASS |
14:58:26 policy-csit | ------------------------------------------------------------------------------
14:58:26 policy-csit | MakeTopics :: Creates the Policy topics | PASS |
14:58:26 policy-csit | ------------------------------------------------------------------------------
14:58:54 policy-csit | ExecuteXacmlPolicy | PASS |
14:58:54 policy-csit | ------------------------------------------------------------------------------
14:58:54 policy-csit | Xacml-Pdp-Test & Xacml-Pdp-Slas.Xacml-Pdp-Test | PASS |
14:58:54 policy-csit | 4 tests, 4 passed, 0 failed
14:58:54 policy-csit | ==============================================================================
14:58:54 policy-csit | Xacml-Pdp-Test & Xacml-Pdp-Slas.Xacml-Pdp-Slas
14:58:54 policy-csit | ==============================================================================
14:59:54 policy-csit | WaitForPrometheusServer :: Sleep time to wait for Prometheus serve... | PASS |
14:59:54 policy-csit | ------------------------------------------------------------------------------
14:59:54 policy-csit | ValidatePolicyDecisionsTotalCounter :: Validate policy decision co... | PASS |
14:59:54 policy-csit | ------------------------------------------------------------------------------
14:59:54 policy-csit | Xacml-Pdp-Test & Xacml-Pdp-Slas.Xacml-Pdp-Slas | PASS |
14:59:54 policy-csit | 2 tests, 2 passed, 0 failed
14:59:54 policy-csit | ==============================================================================
14:59:54 policy-csit | Xacml-Pdp-Test & Xacml-Pdp-Slas | PASS |
14:59:54 policy-csit | 6 tests, 6 passed, 0 failed
14:59:54 policy-csit | ==============================================================================
14:59:54 policy-csit | Output: /tmp/results/output.xml
14:59:54 policy-csit | Log: /tmp/results/log.html
14:59:54 policy-csit | Report: /tmp/results/report.html
14:59:54 policy-csit | RESULT: 0
14:59:54 policy-csit exited with code 0
14:59:54 IMAGE                                                         NAMES              STATUS
14:59:54 nexus3.onap.org:10001/onap/policy-xacml-pdp:4.2.1-SNAPSHOT    policy-xacml-pdp   Up 3 minutes
14:59:54 nexus3.onap.org:10001/onap/policy-pap:4.2.1-SNAPSHOT          policy-pap         Up 3 minutes
14:59:54 nexus3.onap.org:10001/onap/policy-api:4.2.1-SNAPSHOT          policy-api         Up 3 minutes
14:59:54 nexus3.onap.org:10001/confluentinc/cp-kafka:7.4.9             kafka              Up 3 minutes
14:59:54 nexus3.onap.org:10001/grafana/grafana:latest                  grafana            Up 3 minutes
14:59:54 nexus3.onap.org:10001/confluentinc/cp-zookeeper:latest        zookeeper          Up 3 minutes
14:59:54 nexus3.onap.org:10001/prom/prometheus:latest                  prometheus         Up 3 minutes
14:59:54 nexus3.onap.org:10001/library/postgres:16.4                   postgres           Up 3 minutes
14:59:54 Shut down started!
14:59:56 Collecting logs from docker compose containers...
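The ValidatePolicyDecisionsTotalCounter test in the Xacml-Pdp-Slas suite above validates a policy-decision counter scraped by Prometheus, whose server this run publishes on localhost:30259. A rough sketch of that style of check against the Prometheus HTTP API; the metric name below is a placeholder, not taken from the test suite:

import json
import urllib.parse
import urllib.request

# Rough sketch of a check in the spirit of ValidatePolicyDecisionsTotalCounter:
# query the Prometheus HTTP API (port 30259 as published in this log) for a
# counter and assert that it advanced. The metric name is hypothetical.
PROMETHEUS = "http://localhost:30259"
METRIC = "pdpx_policy_decisions_total"  # placeholder metric name

def instant_value(expr: str) -> float:
    url = PROMETHEUS + "/api/v1/query?query=" + urllib.parse.quote(expr)
    with urllib.request.urlopen(url, timeout=10) as resp:
        result = json.load(resp)["data"]["result"]
    # Sum the instant values across all label sets of the metric.
    return sum(float(r["value"][1]) for r in result)

if __name__ == "__main__":
    total = instant_value(METRIC)
    assert total > 0, "no policy decisions recorded"
    print(METRIC, "=", total)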
14:59:56 grafana | logger=settings t=2025-06-13T14:56:39.037412731Z level=info msg="Starting Grafana" version=12.0.1 commit=80658a73c5355e3ed318e5e021c0866285153b57 branch=HEAD compiled=2025-06-13T14:56:39Z
14:59:56 grafana | logger=settings t=2025-06-13T14:56:39.037859097Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
14:59:56 grafana | logger=settings t=2025-06-13T14:56:39.037878907Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
14:59:56 grafana | logger=settings t=2025-06-13T14:56:39.037884447Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
14:59:56 grafana | logger=settings t=2025-06-13T14:56:39.037888887Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
14:59:56 grafana | logger=settings t=2025-06-13T14:56:39.037893657Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
14:59:56 grafana | logger=settings t=2025-06-13T14:56:39.037897537Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
14:59:56 grafana | logger=settings t=2025-06-13T14:56:39.037901707Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
14:59:56 grafana | logger=settings t=2025-06-13T14:56:39.037906497Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
14:59:56 grafana | logger=settings t=2025-06-13T14:56:39.037910887Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
14:59:56 grafana | logger=settings t=2025-06-13T14:56:39.037914538Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
14:59:56 grafana | logger=settings t=2025-06-13T14:56:39.037918618Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
14:59:56 grafana | logger=settings t=2025-06-13T14:56:39.037922658Z level=info msg=Target target=[all]
14:59:56 grafana | logger=settings t=2025-06-13T14:56:39.037932818Z level=info msg="Path Home" path=/usr/share/grafana
14:59:56 grafana | logger=settings t=2025-06-13T14:56:39.037936678Z level=info msg="Path Data" path=/var/lib/grafana
14:59:56 grafana | logger=settings t=2025-06-13T14:56:39.037940428Z level=info msg="Path Logs" path=/var/log/grafana
14:59:56 grafana | logger=settings t=2025-06-13T14:56:39.037945378Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
14:59:56 grafana | logger=settings t=2025-06-13T14:56:39.037950498Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
14:59:56 grafana | logger=settings t=2025-06-13T14:56:39.037956838Z level=info msg="App mode production"
14:59:56 grafana | logger=featuremgmt t=2025-06-13T14:56:39.038421534Z level=info msg=FeatureToggles correlations=true alertingInsights=true panelMonitoring=true formatString=true newDashboardSharingComponent=true pluginsDetailsRightPanel=true promQLScope=true dashboardSceneSolo=true azureMonitorEnableUserAuth=true preinstallAutoUpdate=true logsExploreTableVisualisation=true angularDeprecationUI=true recordedQueriesMulti=true dashgpt=true dataplaneFrontendFallback=true ssoSettingsSAML=true dashboardScene=true unifiedStorageSearchPermissionFiltering=true grafanaconThemes=true awsAsyncQueryCaching=true newPDFRendering=true alertingSimplifiedRouting=true lokiLabelNamesQueryApi=true transformationsRedesign=true alertRuleRestore=true newFiltersUI=true externalCorePlugins=true groupToNestedTableTransformation=true logsInfiniteScrolling=true azureMonitorPrometheusExemplars=true tlsMemcached=true kubernetesPlaylists=true pinNavItems=true alertingUIOptimizeReducer=true prometheusAzureOverrideAudience=true cloudWatchCrossAccountQuerying=true recoveryThreshold=true alertingQueryAndExpressionsStepMode=true influxdbBackendMigration=true useSessionStorageForRedirection=true logsContextDatasourceUi=true logRowsPopoverMenu=true annotationPermissionUpdate=true publicDashboardsScene=true kubernetesClientDashboardsFolders=true alertingRuleVersionHistoryRestore=true onPremToCloudMigrations=true cloudWatchNewLabelParsing=true addFieldFromCalculationStatFunctions=true alertingRuleRecoverDeleted=true prometheusUsesCombobox=true lokiQuerySplitting=true ssoSettingsApi=true cloudWatchRoundUpEndTime=true reportingUseRawTimeRange=true unifiedRequestLog=true lokiQueryHints=true alertingRulePermanentlyDelete=true logsPanelControls=true dashboardSceneForViewers=true alertingApiServer=true nestedFolders=true alertingNotificationsStepMode=true failWrongDSUID=true lokiStructuredMetadata=true
14:59:56 grafana | logger=sqlstore t=2025-06-13T14:56:39.038496635Z level=info msg="Connecting to DB" dbtype=sqlite3
14:59:56 grafana | logger=sqlstore t=2025-06-13T14:56:39.038517925Z level=info msg="Creating SQLite database file" path=/var/lib/grafana/grafana.db
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.040421991Z level=info msg="Locking database"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.040440042Z level=info msg="Starting DB migrations"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.041437715Z level=info msg="Executing migration" id="create migration_log table"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.042719423Z level=info msg="Migration successfully executed" id="create migration_log table" duration=1.281727ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.04843838Z level=info msg="Executing migration" id="create user table"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.049075388Z level=info msg="Migration successfully executed" id="create user table" duration=635.099µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.054290768Z level=info msg="Executing migration" id="add unique index user.login"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.055825178Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=1.52141ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.062826123Z level=info msg="Executing migration" id="add unique index user.email"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.063644464Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=819.421µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.067399765Z level=info msg="Executing migration" id="drop index UQE_user_login - v1"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.068134455Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=734.29µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.071580961Z level=info msg="Executing migration" id="drop index UQE_user_email - v1"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.07226152Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=679.919µs
level=info msg="Executing migration" id="Rename table user to user_v1 - v1" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.081386874Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=2.003028ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.084540566Z level=info msg="Executing migration" id="create user table v2" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.085121484Z level=info msg="Migration successfully executed" id="create user table v2" duration=580.398µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.088285936Z level=info msg="Executing migration" id="create index UQE_user_login - v2" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.088967095Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=680.969µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.094675682Z level=info msg="Executing migration" id="create index UQE_user_email - v2" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.095787777Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=1.088245ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.099456607Z level=info msg="Executing migration" id="copy data_source v1 to v2" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.100024294Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=567.248µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.103959708Z level=info msg="Executing migration" id="Drop old table user_v1" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.104414094Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=454.355µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.110518596Z level=info msg="Executing migration" id="Add column help_flags1 to user table" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.111850313Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=1.332077ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.11605559Z level=info msg="Executing migration" id="Update user table charset" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.116094821Z level=info msg="Migration successfully executed" id="Update user table charset" duration=40.121µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.119245833Z level=info msg="Executing migration" id="Add last_seen_at column to user" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.121052848Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=1.809215ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.124290131Z level=info msg="Executing migration" id="Add missing user data" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.124489004Z level=info msg="Migration successfully executed" id="Add missing user data" duration=198.673µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.129897647Z level=info msg="Executing migration" id="Add is_disabled column to user" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.131517458Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=1.619881ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.136542006Z level=info msg="Executing migration" id="Add index user.login/user.email" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.137692392Z level=info 
msg="Migration successfully executed" id="Add index user.login/user.email" duration=1.149206ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.141498833Z level=info msg="Executing migration" id="Add is_service_account column to user" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.14277799Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=1.278377ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.146799594Z level=info msg="Executing migration" id="Update is_service_account column to nullable" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.155466101Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=8.663077ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.171375546Z level=info msg="Executing migration" id="Add uid column to user" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.173321801Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=1.947876ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.177868653Z level=info msg="Executing migration" id="Update uid column values for users" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.178230367Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=365.284µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.18210942Z level=info msg="Executing migration" id="Add unique index user_uid" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.18283011Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=719.83µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.187728606Z level=info msg="Executing migration" id="Add is_provisioned column to user" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.188990233Z level=info msg="Migration successfully executed" id="Add is_provisioned column to user" duration=1.261407ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.192888525Z level=info msg="Executing migration" id="update login field with orgid to allow for multiple service accounts with same name across orgs" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.19323297Z level=info msg="Migration successfully executed" id="update login field with orgid to allow for multiple service accounts with same name across orgs" duration=344.475µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.197082552Z level=info msg="Executing migration" id="update service accounts login field orgid to appear only once" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.197847442Z level=info msg="Migration successfully executed" id="update service accounts login field orgid to appear only once" duration=764.14µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.202093799Z level=info msg="Executing migration" id="update login and email fields to lowercase" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.20290615Z level=info msg="Migration successfully executed" id="update login and email fields to lowercase" duration=813.971µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.206247865Z level=info msg="Executing migration" id="update login and email fields to lowercase2" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.206825422Z level=info msg="Migration successfully executed" id="update login and email fields to lowercase2" duration=577.257µs 14:59:56 grafana | logger=migrator 
t=2025-06-13T14:56:39.21027223Z level=info msg="Executing migration" id="create temp user table v1-7" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.211641438Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=1.369158ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.217940772Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.218718073Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=776.501µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.223311995Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.224753094Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=1.443559ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.228673317Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.229499158Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=826.431µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.23410136Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.235273406Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=1.171196ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.240103971Z level=info msg="Executing migration" id="Update temp_user table charset" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.240144571Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=41.79µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.244272917Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.2452025Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=932.793µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.249163953Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.249955184Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=790.721µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.255350247Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.256140967Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=789.99µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.259563923Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.260419584Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=852.161µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.269454227Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.274701457Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" 
duration=5.24731ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.278174164Z level=info msg="Executing migration" id="create temp_user v2" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.279249138Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=1.075214ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.282950749Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.28384132Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=888.531µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.286947912Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.287877244Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=928.292µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.302360789Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.303954511Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=1.595312ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.308217319Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.309614067Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=1.399208ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.313064894Z level=info msg="Executing migration" id="copy temp_user v1 to v2" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.313473299Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=408.336µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.319751514Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.320554275Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=802.651µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.327899394Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.328502382Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=602.558µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.332689998Z level=info msg="Executing migration" id="create star table" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.333861964Z level=info msg="Migration successfully executed" id="create star table" duration=1.172116ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.339295427Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.340089708Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=793.451µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.343472013Z level=info msg="Executing migration" id="Add column dashboard_uid in star" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.344953284Z level=info msg="Migration successfully executed" id="Add column dashboard_uid in star" duration=1.480561ms 
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.34995006Z level=info msg="Executing migration" id="Add column org_id in star" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.35139453Z level=info msg="Migration successfully executed" id="Add column org_id in star" duration=1.44423ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.356798133Z level=info msg="Executing migration" id="Add column updated in star" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.359311277Z level=info msg="Migration successfully executed" id="Add column updated in star" duration=2.510134ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.362908955Z level=info msg="Executing migration" id="add index in star table on dashboard_uid, org_id and user_id columns" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.364375735Z level=info msg="Migration successfully executed" id="add index in star table on dashboard_uid, org_id and user_id columns" duration=1.47022ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.368253327Z level=info msg="Executing migration" id="create org table v1" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.369126759Z level=info msg="Migration successfully executed" id="create org table v1" duration=872.762µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.372538065Z level=info msg="Executing migration" id="create index UQE_org_name - v1" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.37366097Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=1.121445ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.381553727Z level=info msg="Executing migration" id="create org_user table v1" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.382391888Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=837.991µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.38633024Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.38772538Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=1.395599ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.391624422Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.392759348Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=1.135416ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.395840909Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.396718Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=876.891µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.403120486Z level=info msg="Executing migration" id="Update org table charset" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.403280739Z level=info msg="Migration successfully executed" id="Update org table charset" duration=161.332µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.407664898Z level=info msg="Executing migration" id="Update org_user table charset" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.40781475Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=151.282µs 14:59:56 grafana | logger=migrator 
t=2025-06-13T14:56:39.411473569Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.412084398Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=609.999µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.416086661Z level=info msg="Executing migration" id="create dashboard table" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.417744704Z level=info msg="Migration successfully executed" id="create dashboard table" duration=1.657153ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.430566246Z level=info msg="Executing migration" id="add index dashboard.account_id" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.432180988Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=1.615222ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.437412049Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.438512733Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=1.100014ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.441904749Z level=info msg="Executing migration" id="create dashboard_tag table" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.442873033Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=968.414µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.448865283Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.449770835Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=905.162µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.453442784Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.454112524Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=669.9µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.458636344Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.465071212Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=6.430918ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.469510271Z level=info msg="Executing migration" id="create dashboard v2" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.471229845Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=1.720483ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.476350194Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.478229668Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=1.878794ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.487013026Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.48796168Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=947.874µs 
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.491763511Z level=info msg="Executing migration" id="copy dashboard v1 to v2"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.492216687Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=450.836µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.497885213Z level=info msg="Executing migration" id="drop table dashboard_v1"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.499292952Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=1.409149ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.503306576Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.503331916Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=26.72µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.507889388Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.510005067Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=2.114939ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.515647973Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.517754481Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=2.086467ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.521096426Z level=info msg="Executing migration" id="Add column gnetId in dashboard"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.523087173Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=1.989126ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.526145454Z level=info msg="Executing migration" id="Add index for gnetId in dashboard"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.52734951Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=1.203296µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.533335981Z level=info msg="Executing migration" id="Add column plugin_id in dashboard"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.537231303Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=3.902132ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.543072732Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.544161987Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=1.088585ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.558117175Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.559804887Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=1.688452ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.576305779Z level=info msg="Executing migration" id="Update dashboard table charset"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.57634938Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=45.351µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.581781483Z level=info msg="Executing migration" id="Update dashboard_tag table charset"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.581822734Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=42.671µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.588623686Z level=info msg="Executing migration" id="Add column folder_id in dashboard"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.594166231Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=5.541935ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.606221533Z level=info msg="Executing migration" id="Add column isFolder in dashboard"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.610890376Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=4.666843ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.61572669Z level=info msg="Executing migration" id="Add column has_acl in dashboard"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.617793348Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=2.066648ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.621524938Z level=info msg="Executing migration" id="Add column uid in dashboard"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.623592966Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=2.066978ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.628818937Z level=info msg="Executing migration" id="Update uid column values in dashboard"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.62906141Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=242.313µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.633300278Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.635917833Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=2.621145ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.641711731Z level=info msg="Executing migration" id="Remove unique index org_id_slug"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.64319174Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=1.486299ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.649909682Z level=info msg="Executing migration" id="Update dashboard title length"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.650092355Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=183.423µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.654139378Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.655681759Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=1.540961ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.659965926Z level=info msg="Executing migration" id="create dashboard_provisioning"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.661532788Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=1.567932ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.685369218Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.690715961Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=5.346533ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.694591242Z level=info msg="Executing migration" id="create dashboard_provisioning v2"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.695265832Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=671.8µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.700525563Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.701309414Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=783.12µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.704671129Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.70551743Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=866.112µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.709072938Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.709472144Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=396.185µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.716183824Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.717238468Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=1.048675ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.72183851Z level=info msg="Executing migration" id="Add check_sum column"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.724153741Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=2.315051ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.72708259Z level=info msg="Executing migration" id="Add index for dashboard_title"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.728031713Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=947.853µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.731548061Z level=info msg="Executing migration" id="delete tags for deleted dashboards"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.731716533Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=168.312µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.736467627Z level=info msg="Executing migration" id="delete stars for deleted dashboards"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.736649379Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=181.502µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.73963664Z level=info msg="Executing migration" id="Add index for dashboard_is_folder"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.74041011Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=773.11µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.745050622Z level=info msg="Executing migration" id="Add isPublic for dashboard"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.747327143Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=2.276021ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.752914628Z level=info msg="Executing migration" id="Add deleted for dashboard"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.755820647Z level=info msg="Migration successfully executed" id="Add deleted for dashboard" duration=2.905269ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.759401036Z level=info msg="Executing migration" id="Add index for deleted"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.760210397Z level=info msg="Migration successfully executed" id="Add index for deleted" duration=810.561µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.763700434Z level=info msg="Executing migration" id="Add column dashboard_uid in dashboard_tag"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.766102216Z level=info msg="Migration successfully executed" id="Add column dashboard_uid in dashboard_tag" duration=2.401202ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.769530342Z level=info msg="Executing migration" id="Add column org_id in dashboard_tag"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.771798523Z level=info msg="Migration successfully executed" id="Add column org_id in dashboard_tag" duration=2.267251ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.776822961Z level=info msg="Executing migration" id="Add missing dashboard_uid and org_id to dashboard_tag"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.777294717Z level=info msg="Migration successfully executed" id="Add missing dashboard_uid and org_id to dashboard_tag" duration=471.296µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.780451519Z level=info msg="Executing migration" id="Add apiVersion for dashboard"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.78273294Z level=info msg="Migration successfully executed" id="Add apiVersion for dashboard" duration=2.280761ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.788610059Z level=info msg="Executing migration" id="Add index for dashboard_uid on dashboard_tag table"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.789476341Z level=info msg="Migration successfully executed" id="Add index for dashboard_uid on dashboard_tag table" duration=866.412µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.803112344Z level=info msg="Executing migration" id="Add missing dashboard_uid and org_id to star"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.804177609Z level=info msg="Migration successfully executed" id="Add missing dashboard_uid and org_id to star" duration=1.063455ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.814585279Z level=info msg="Executing migration" id="create data_source table"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.815526122Z level=info msg="Migration successfully executed" id="create data_source table" duration=940.703µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.818944288Z level=info msg="Executing migration" id="add index data_source.account_id"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.820037722Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=1.122654ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.825546107Z level=info msg="Executing migration" id="add unique index data_source.account_id_name"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.826387999Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=841.282µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.83022181Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.83101548Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=793.45µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.834230743Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.834971544Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=740.701µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.842069739Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.851974433Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=9.906664ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.855361799Z level=info msg="Executing migration" id="create data_source table v2"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.856444873Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=1.082304ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.859792478Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.860370466Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=578.088µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.866790842Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.86816479Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=1.376838ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.875214936Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.875974396Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=759.71µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.881742584Z level=info msg="Executing migration" id="Add column with_credentials"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.884117846Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=2.372763ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.889188394Z level=info msg="Executing migration" id="Add secure json data column"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.891786729Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=2.597905ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.895827023Z level=info msg="Executing migration" id="Update data_source table charset"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.895851764Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=25.381µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.901955436Z level=info msg="Executing migration" id="Update initial version to 1"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.902150279Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=195.263µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.905589655Z level=info msg="Executing migration" id="Add read_only data column"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.90967974Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=4.088465ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.914791129Z level=info msg="Executing migration" id="Migrate logging ds to loki ds"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.915189454Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=399.055µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.919894468Z level=info msg="Executing migration" id="Update json_data with nulls"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.920183442Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=289.284µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.925302781Z level=info msg="Executing migration" id="Add uid column"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.927675833Z level=info msg="Migration successfully executed" id="Add uid column" duration=2.372582ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.940350913Z level=info msg="Executing migration" id="Update uid value"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.94085291Z level=info msg="Migration successfully executed" id="Update uid value" duration=501.717µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.946010989Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.947046504Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=1.036245ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.950305498Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.951742086Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=1.435068ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.955716061Z level=info msg="Executing migration" id="Add is_prunable column"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.958373686Z level=info msg="Migration successfully executed" id="Add is_prunable column" duration=2.656816ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.961758912Z level=info msg="Executing migration" id="Add api_version column"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.964421688Z level=info msg="Migration successfully executed" id="Add api_version column" duration=2.662396ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.970909475Z level=info msg="Executing migration" id="Update secure_json_data column to MediumText"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.970992836Z level=info msg="Migration successfully executed" id="Update secure_json_data column to MediumText" duration=84.131µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.976936246Z level=info msg="Executing migration" id="create api_key table"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.977801258Z level=info msg="Migration successfully executed" id="create api_key table" duration=864.622µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.98170242Z level=info msg="Executing migration" id="add index api_key.account_id"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.982580562Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=877.922µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.986292492Z level=info msg="Executing migration" id="add index api_key.key"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.987143574Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=850.742µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.995858691Z level=info msg="Executing migration" id="add index api_key.account_id_name"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:39.997088357Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=1.229556ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.00100941Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.001858622Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=849.052µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.005058464Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.005917696Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=858.912µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.01145137Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.012342981Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=891.532µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.01604941Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.023374287Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=7.324237ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.02962641Z level=info msg="Executing migration" id="create api_key table v2"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.030237137Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=610.737µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.033584241Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.034533295Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=948.814µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.0379888Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.038975873Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=989.113µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.045235566Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.046201269Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=965.413µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.050693909Z level=info msg="Executing migration" id="copy api_key v1 to v2"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.051164085Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=469.576µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.054501019Z level=info msg="Executing migration" id="Drop old table api_key_v1"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.055176527Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=674.648µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.074774687Z level=info msg="Executing migration" id="Update api_key table charset"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.075679259Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=905.162µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.079336518Z level=info msg="Executing migration" id="Add expires to api_key table"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.082340987Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=3.00363ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.085904714Z level=info msg="Executing migration" id="Add service account foreign key"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.088760242Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=2.857168ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.093636187Z level=info msg="Executing migration" id="set service account foreign key to nil if 0"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.093857789Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=221.832µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.097853782Z level=info msg="Executing migration" id="Add last_used_at to api_key table"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.100923313Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=3.068821ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.104392369Z level=info msg="Executing migration" id="Add is_revoked column to api_key table"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.106994784Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=2.601735ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.111962489Z level=info msg="Executing migration" id="create dashboard_snapshot table v4"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.113252676Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=1.290247ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.116938345Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.117830767Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=891.822µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.121515146Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.122478218Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=959.672µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.128355796Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.129142877Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=784.271µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.132218657Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.132972667Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=753.83µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.136177209Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.137103771Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=926.082µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.141294817Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.141313837Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=19.54µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.144159825Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.144181265Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=22.38µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.14758126Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.150700641Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=3.118901ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.154825146Z level=info msg="Executing migration" id="Add encrypted dashboard json column"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.157828436Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=3.00272ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.161243471Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.161264891Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=22.47µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.167805247Z level=info msg="Executing migration" id="create quota table v1"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.169017504Z level=info msg="Migration successfully executed" id="create quota table v1" duration=1.212177ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.173388521Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.174338735Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=952.344µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.178932475Z level=info msg="Executing migration" id="Update quota table charset"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.178947645Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=15.7µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.183525276Z level=info msg="Executing migration" id="create plugin_setting table"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.184135474Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=609.988µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.196455937Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.198316741Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=1.860434ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.203133275Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.206114235Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=2.98064ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.21102416Z level=info msg="Executing migration" id="Update plugin_setting table charset"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.2110478Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=24.39µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.214876301Z level=info msg="Executing migration" id="update NULL org_id to 1"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.215220135Z level=info msg="Migration successfully executed" id="update NULL org_id to 1" duration=342.995µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.220016489Z level=info msg="Executing migration" id="make org_id NOT NULL and DEFAULT VALUE 1"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.230998834Z level=info msg="Migration successfully executed" id="make org_id NOT NULL and DEFAULT VALUE 1" duration=10.981755ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.238606075Z level=info msg="Executing migration" id="create session table"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.239377215Z level=info msg="Migration successfully executed" id="create session table" duration=772.73µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.242986842Z level=info msg="Executing migration" id="Drop old table playlist table"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.243175055Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=185.513µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.247599664Z level=info msg="Executing migration" id="Drop old table playlist_item table"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.247677525Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=77.981µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.250951628Z level=info msg="Executing migration" id="create playlist table v2"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.251693718Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=741.659µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.257077959Z level=info msg="Executing migration" id="create playlist item table v2"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.25783176Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=753.401µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.262051475Z level=info msg="Executing migration" id="Update playlist table charset"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.262072685Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=21.96µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.267035381Z level=info msg="Executing migration" id="Update playlist_item table charset"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.267068371Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=34.08µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.27155918Z level=info msg="Executing migration" id="Add playlist column created_at"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.278400281Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=6.837991ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.29042528Z level=info msg="Executing migration" id="Add playlist column updated_at"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.295142373Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=4.717713ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.300388952Z level=info msg="Executing migration" id="drop preferences table v2"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.300554415Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=170.653µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.304603798Z level=info msg="Executing migration" id="drop preferences table v3"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.30476812Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=164.852µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.309827867Z level=info msg="Executing migration" id="create preferences table v3"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.31080348Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=977.362µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.323463258Z level=info msg="Executing migration" id="Update preferences table charset"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.323509368Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=47.401µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.327676283Z level=info msg="Executing migration" id="Add column team_id in preferences"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.334571215Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=6.893691ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.338935782Z level=info msg="Executing migration" id="Update team_id column values in preferences"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.339039554Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=104.012µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.342068104Z level=info msg="Executing migration" id="Add column week_start in preferences"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.344468045Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=2.398851ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.349025916Z level=info msg="Executing migration" id="Add column preferences.json_data"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.35240641Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=3.376934ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.357951894Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.358250427Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=305.184µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.365125669Z level=info msg="Executing migration" id="Add preferences index org_id"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.366314384Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=1.185015ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.37046541Z level=info msg="Executing migration" id="Add preferences index user_id"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.371582474Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=1.116814ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.375532467Z level=info msg="Executing migration" id="create alert table v1"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.377286369Z level=info msg="Migration successfully executed" id="create alert table v1" duration=1.752792ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.384939841Z level=info msg="Executing migration" id="add index alert org_id & id "
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.386928918Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=1.988077ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.392615253Z level=info msg="Executing migration" id="add index alert state"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.393336282Z level=info msg="Migration successfully executed" id="add index alert state" duration=720.919µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.396585455Z level=info msg="Executing migration" id="add index alert dashboard_id"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.397277574Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=692.039µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.405418232Z level=info msg="Executing migration" id="Create alert_rule_tag table v1"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.406528477Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=1.109845ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.412423205Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.414169937Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=1.746482ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.418236242Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.419247815Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=1.011673ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.42723288Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.44231781Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=15.08279ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.448115426Z level=info msg="Executing migration" id="Create alert_rule_tag table v2"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.44909064Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=975.364µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.452760418Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.453833663Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=1.072985ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.46198641Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.462539378Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=552.848µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.468059461Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.469648102Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=1.58861ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.473975959Z level=info msg="Executing migration" id="create alert_notification table v1"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.475001522Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=1.025283ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.480727788Z level=info msg="Executing migration" id="Add column is_default"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.484784222Z level=info msg="Migration successfully executed" id="Add column is_default" duration=4.055054ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.488995298Z level=info msg="Executing migration" id="Add column frequency"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.494156386Z level=info msg="Migration successfully executed" id="Add column frequency" duration=5.139898ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.498132848Z level=info msg="Executing migration" id="Add column send_reminder"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.502184912Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=4.051534ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.50885479Z level=info msg="Executing migration" id="Add column disable_resolve_message"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.512976245Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=4.120915ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.536441335Z level=info msg="Executing migration"
id="add index alert_notification org_id & name" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.537358547Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=916.392µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.540265436Z level=info msg="Executing migration" id="Update alert table charset" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.540286456Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=21.92µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.5443617Z level=info msg="Executing migration" id="Update alert_notification table charset" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.544383101Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=21.701µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.549666261Z level=info msg="Executing migration" id="create notification_journal table v1" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.550801305Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=1.132244ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.554813139Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.556129456Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=1.315607ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.57456369Z level=info msg="Executing migration" id="drop alert_notification_journal" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.57831886Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=3.75752ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.593748003Z level=info msg="Executing migration" id="create alert_notification_state table v1" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.594498744Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=750.611µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.600813777Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.602020903Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=1.207166ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.607960202Z level=info msg="Executing migration" id="Add for to alert table" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.612225458Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=4.265516ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.616782549Z level=info msg="Executing migration" id="Add column uid in alert_notification" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.620933993Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=4.150804ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.628396852Z level=info msg="Executing migration" id="Update uid column values in alert_notification" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.628635515Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" 
duration=238.703µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.633285667Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.634149638Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=863.551µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.639840364Z level=info msg="Executing migration" id="Remove unique index org_id_name" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.640603574Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=763.259µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.647146821Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.65092369Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=3.776529ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.656489334Z level=info msg="Executing migration" id="alter alert.settings to mediumtext" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.656544775Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=56.491µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.660276804Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.661212917Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=936.203µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.665781957Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.666829621Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=1.046654ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.677912877Z level=info msg="Executing migration" id="Drop old annotation table v4" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.67806457Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=158.513µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.682547859Z level=info msg="Executing migration" id="create annotation table v5" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.683792286Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=1.242997ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.699143608Z level=info msg="Executing migration" id="add index annotation 0 v3" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.701210676Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=2.066038ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.705959749Z level=info msg="Executing migration" id="add index annotation 1 v3" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.707045033Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=1.084684ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.710784362Z level=info msg="Executing migration" id="add index annotation 2 v3" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.711878067Z level=info msg="Migration successfully executed" id="add index annotation 2 
v3" duration=1.093285ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.718874229Z level=info msg="Executing migration" id="add index annotation 3 v3" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.720061505Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=1.186886ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.727153529Z level=info msg="Executing migration" id="add index annotation 4 v3" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.7280157Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=861.771µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.734644278Z level=info msg="Executing migration" id="Update annotation table charset" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.734683159Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=40.251µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.741364567Z level=info msg="Executing migration" id="Add column region_id to annotation table" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.747305865Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=5.941188ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.750869613Z level=info msg="Executing migration" id="Drop category_id index" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.752414413Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=1.54494ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.757918166Z level=info msg="Executing migration" id="Add column tags to annotation table" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.762631809Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=4.712443ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.767160338Z level=info msg="Executing migration" id="Create annotation_tag table v2" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.768303304Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=1.141616ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.772535929Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.773710275Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=1.174266ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.783078429Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.784707831Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=1.630022ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.790368166Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.804480862Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=14.110446ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.808623227Z level=info msg="Executing migration" id="Create annotation_tag table v3" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.809554019Z level=info msg="Migration successfully executed" 
id="Create annotation_tag table v3" duration=930.422µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.825736723Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.827067901Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=1.336158ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.839172881Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.839654747Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=481.456µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.843388517Z level=info msg="Executing migration" id="drop table annotation_tag_v2" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.844040006Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=647.449µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.84820389Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.848419033Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=214.993µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.852531148Z level=info msg="Executing migration" id="Add created time to annotation table" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.857006687Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=4.474579ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.864311124Z level=info msg="Executing migration" id="Add updated time to annotation table" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.870174481Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=5.863367ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.873706608Z level=info msg="Executing migration" id="Add index for created in annotation table" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.874650831Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=939.963µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.879007238Z level=info msg="Executing migration" id="Add index for updated in annotation table" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.880158774Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=1.150926ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.886499367Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.886771701Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=271.874µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.901182951Z level=info msg="Executing migration" id="Add epoch_end column" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.90860959Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=7.425809ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.915679883Z level=info 
msg="Executing migration" id="Add index for epoch_end" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.916380163Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=700.5µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.923061121Z level=info msg="Executing migration" id="Make epoch_end the same as epoch" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.923417096Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=353.755µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.928569424Z level=info msg="Executing migration" id="Move region to single row" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.929498746Z level=info msg="Migration successfully executed" id="Move region to single row" duration=925.022µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.934127557Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.935670338Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=1.544211ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.941303873Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.942099663Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=795.49µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.955922406Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.957485386Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=1.56242ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.964946975Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.966237212Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=1.289437ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.971175257Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.972064209Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=888.662µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.975389904Z level=info msg="Executing migration" id="Add index for alert_id on annotation table" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.976240385Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=850.531µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.980335638Z level=info msg="Executing migration" id="Increase tags column to length 4096" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.980352699Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=17.961µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.985658139Z level=info msg="Executing migration" id="Increase prev_state column to length 40 not null" 14:59:56 grafana | logger=migrator 
t=2025-06-13T14:56:40.98572435Z level=info msg="Migration successfully executed" id="Increase prev_state column to length 40 not null" duration=64.861µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.992558641Z level=info msg="Executing migration" id="Increase new_state column to length 40 not null" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.992589071Z level=info msg="Migration successfully executed" id="Increase new_state column to length 40 not null" duration=28µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.996628034Z level=info msg="Executing migration" id="create test_data table" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:40.997558147Z level=info msg="Migration successfully executed" id="create test_data table" duration=929.653µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.002638644Z level=info msg="Executing migration" id="create dashboard_version table v1" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.00383481Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=1.193597ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.007897069Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.009349821Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=1.452412ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.016252915Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.017815659Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=1.558174ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.021342052Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.021540215Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=201.603µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.023905981Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.024265707Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=356.896µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.026541151Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.026556761Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=16.37µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.032272238Z level=info msg="Executing migration" id="Add apiVersion for dashboard_version" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.037808282Z level=info msg="Migration successfully executed" id="Add apiVersion for dashboard_version" duration=5.532534ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.043183273Z level=info msg="Executing migration" id="create team table" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.044363511Z level=info msg="Migration successfully executed" id="create team table" duration=1.181348ms 14:59:56 grafana | logger=migrator 
t=2025-06-13T14:56:41.048645046Z level=info msg="Executing migration" id="add index team.org_id" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.04956327Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=918.324µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.055343527Z level=info msg="Executing migration" id="add unique index team_org_id_name" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.05681414Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=1.469993ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.061879537Z level=info msg="Executing migration" id="Add column uid in team" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.067550802Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=5.667285ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.082380627Z level=info msg="Executing migration" id="Update uid column values in team" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.082790103Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=409.586µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.089685427Z level=info msg="Executing migration" id="Add unique index team_org_id_uid" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.090946366Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=1.261979ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.098511961Z level=info msg="Executing migration" id="Add column external_uid in team" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.106250608Z level=info msg="Migration successfully executed" id="Add column external_uid in team" duration=7.736516ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.113516518Z level=info msg="Executing migration" id="Add column is_provisioned in team" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.119791183Z level=info msg="Migration successfully executed" id="Add column is_provisioned in team" duration=6.270065ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.123907286Z level=info msg="Executing migration" id="create team member table" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.124768498Z level=info msg="Migration successfully executed" id="create team member table" duration=860.813µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.129444449Z level=info msg="Executing migration" id="add index team_member.org_id" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.130561236Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=1.116357ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.138466056Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.139573582Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=1.107146ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.146182193Z level=info msg="Executing migration" id="add index team_member.team_id" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.147924219Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=1.742195ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.158402837Z level=info msg="Executing migration" id="Add column email to 
team table" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.166885486Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=8.482049ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.171326673Z level=info msg="Executing migration" id="Add column external to team_member table" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.18106916Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=9.740817ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.186649935Z level=info msg="Executing migration" id="Add column permission to team_member table" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.191826143Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=5.176018ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.209576872Z level=info msg="Executing migration" id="add unique index team_member_user_id_org_id" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.211117086Z level=info msg="Migration successfully executed" id="add unique index team_member_user_id_org_id" duration=1.534413ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.215532542Z level=info msg="Executing migration" id="create dashboard acl table" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.216921393Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=1.384491ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.220527218Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.22137745Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=849.992µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.226597599Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.228131643Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=1.533614ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.231452163Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.233003947Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=1.550564ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.237526635Z level=info msg="Executing migration" id="add index dashboard_acl_user_id" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.238388899Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=861.834µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.246566322Z level=info msg="Executing migration" id="add index dashboard_acl_team_id" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.248115516Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=1.544943ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.251809191Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.252795337Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=985.456µs 14:59:56 grafana | logger=migrator 
t=2025-06-13T14:56:41.255920414Z level=info msg="Executing migration" id="add index dashboard_permission" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.256803587Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=882.683µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.265737862Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.266173568Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=435.446µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.271411608Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.271757643Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=345.625µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.27684671Z level=info msg="Executing migration" id="create tag table" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.277969148Z level=info msg="Migration successfully executed" id="create tag table" duration=1.121478ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.281260237Z level=info msg="Executing migration" id="add index tag.key_value" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.28213469Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=874.013µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.285286398Z level=info msg="Executing migration" id="create login attempt table" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.28600187Z level=info msg="Migration successfully executed" id="create login attempt table" duration=717.612µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.290852612Z level=info msg="Executing migration" id="add index login_attempt.username" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.291800347Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=947.265µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.295029136Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.295886309Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=856.773µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.29991652Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.314134715Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=14.216525ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.319312803Z level=info msg="Executing migration" id="create login_attempt v2" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.319865582Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=552.669µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.334061107Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.335047632Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=988.445µs 14:59:56 grafana | 
logger=migrator t=2025-06-13T14:56:41.34092215Z level=info msg="Executing migration" id="copy login_attempt v1 to v2" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.341234825Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=312.525µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.345386738Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.345992267Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=602.069µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.354600838Z level=info msg="Executing migration" id="create user auth table" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.357789756Z level=info msg="Migration successfully executed" id="create user auth table" duration=3.182388ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.370198414Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.371387252Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=1.188148ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.378264685Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.378288727Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=25.332µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.383991983Z level=info msg="Executing migration" id="Add OAuth access token to user_auth" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.391986743Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=7.99158ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.396195607Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.401107312Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=4.910795ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.409293286Z level=info msg="Executing migration" id="Add OAuth token type to user_auth" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.422874351Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=13.579955ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.426632908Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.431242897Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=4.609079ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.436891023Z level=info msg="Executing migration" id="Add index to user_id column in user_auth" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.437776887Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=886.464µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.444889244Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.450744663Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=5.857139ms 14:59:56 grafana | logger=migrator 
t=2025-06-13T14:56:41.488635187Z level=info msg="Executing migration" id="Add user_unique_id to user_auth" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.494621927Z level=info msg="Migration successfully executed" id="Add user_unique_id to user_auth" duration=5.98442ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.504196012Z level=info msg="Executing migration" id="create server_lock table" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.505649554Z level=info msg="Migration successfully executed" id="create server_lock table" duration=1.451472ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.512815903Z level=info msg="Executing migration" id="add index server_lock.operation_uid" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.513846768Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=1.030325ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.521126908Z level=info msg="Executing migration" id="create user auth token table" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.522318957Z level=info msg="Migration successfully executed" id="create user auth token table" duration=1.194919ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.536051845Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.537132841Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=1.082116ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.54958555Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.551893334Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=2.308174ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.559473579Z level=info msg="Executing migration" id="add index user_auth_token.user_id" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.560591686Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=1.118157ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.565987468Z level=info msg="Executing migration" id="Add revoked_at to the user auth token" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.574042929Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=8.050861ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.58663857Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.587753117Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=1.115457ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.595638197Z level=info msg="Executing migration" id="add external_session_id to user_auth_token" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.605062619Z level=info msg="Migration successfully executed" id="add external_session_id to user_auth_token" duration=9.420093ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.613935974Z level=info msg="Executing migration" id="create cache_data table" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.614824017Z level=info msg="Migration successfully executed" id="create cache_data table" duration=888.103µs 14:59:56 grafana | logger=migrator 
t=2025-06-13T14:56:41.621764862Z level=info msg="Executing migration" id="add unique index cache_data.cache_key" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.622715997Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=951.465µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.627367297Z level=info msg="Executing migration" id="create short_url table v1" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.628764098Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=1.395952ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.632769058Z level=info msg="Executing migration" id="add index short_url.org_id-uid" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.633709353Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=940.035µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.641153715Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.641187276Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=35.761µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.64676781Z level=info msg="Executing migration" id="delete alert_definition table" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.646881652Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=114.492µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.650030911Z level=info msg="Executing migration" id="recreate alert_definition table" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.65132866Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=1.29615ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.657118567Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.659742147Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=2.62451ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.666459688Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.667562906Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=1.102738ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.67184923Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.67187481Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=27.43µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.678352489Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.679964973Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=1.612464ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.683859333Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns" 
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.685366325Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=1.507072ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.690198098Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.691977055Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=1.779657ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.699108413Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.70027503Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=1.166907ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.703723833Z level=info msg="Executing migration" id="Add column paused in alert_definition" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.713535171Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=9.808948ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.722173442Z level=info msg="Executing migration" id="drop alert_definition table" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.723197108Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=1.025536ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.736122083Z level=info msg="Executing migration" id="delete alert_definition_version table" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.736414057Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=291.594µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.7425011Z level=info msg="Executing migration" id="recreate alert_definition_version table" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.744610342Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=2.108502ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.98760042Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:41.990088818Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=2.489298ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:42.362034358Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:42.364610534Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=2.573206ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:42.753345378Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:42.753420009Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=78.611µs 14:59:56 grafana | logger=migrator 
t=2025-06-13T14:56:42.962512215Z level=info msg="Executing migration" id="drop alert_definition_version table"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:42.96359578Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=1.085605ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:42.981013678Z level=info msg="Executing migration" id="create alert_instance table"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:42.983048656Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=2.034418ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:42.990073116Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:42.991134201Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=1.060655ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:42.996337945Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:42.997889777Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=1.548242ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.002306049Z level=info msg="Executing migration" id="add column current_state_end to alert_instance"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.00900485Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=6.697891ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.019031508Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.020830846Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=1.796908ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.032484409Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.033589485Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=1.104646ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.043423122Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.071393829Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=28.017187ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.075330897Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.098447662Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=23.113895ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.102484582Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.103246963Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=762.211µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.108572313Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.109300453Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=727.81µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.114224707Z level=info msg="Executing migration" id="add current_reason column related to current_state"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.120430279Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=6.204962ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.12715586Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.134810954Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=7.654214ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.139769487Z level=info msg="Executing migration" id="create alert_rule table"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.140815264Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=1.050377ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.152918574Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.155399151Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=2.479937ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.162207642Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.163287608Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=1.079976ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.167335838Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.168397025Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=1.060277ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.171854616Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.171876886Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=22.96µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.175416349Z level=info msg="Executing migration" id="add column for to alert_rule"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.181976026Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=6.559007ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.189271295Z level=info msg="Executing migration" id="add column annotations to alert_rule"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.200867108Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=11.595833ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.206203248Z level=info msg="Executing migration" id="add column labels to alert_rule"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.214155486Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=7.951398ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.21845352Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.219322963Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=869.103µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.224027513Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.225003288Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=974.895µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.230978047Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.23724126Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=6.262783ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.242305806Z level=info msg="Executing migration" id="add panel_id column to alert_rule"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.25133038Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=9.024924ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.257687264Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.258627809Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=942.675µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.264573217Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.271808225Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=7.233988ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.283612341Z level=info msg="Executing migration" id="add is_paused column to alert_rule table"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.294597285Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=10.988724ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.299218583Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.299245674Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=28.571µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.303494038Z level=info msg="Executing migration" id="create alert_rule_version table"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.305304494Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=1.809917ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.31239363Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.313601317Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.207397ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.318265277Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.320177396Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=1.909289ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.324293397Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.324323817Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=37.37µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.331053178Z level=info msg="Executing migration" id="add column for to alert_rule_version"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.338296296Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=7.246398ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.344094762Z level=info msg="Executing migration" id="add column annotations to alert_rule_version"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.350811993Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=6.717581ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.354751701Z level=info msg="Executing migration" id="add column labels to alert_rule_version"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.363136276Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=8.379265ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.374235501Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.386622545Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=12.385574ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.392069087Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.399146713Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=7.076586ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.419934262Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.420024353Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=91.781µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.427352293Z level=info msg="Executing migration" id=create_alert_configuration_table
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.429053879Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=1.701506ms
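Most migrator entries above come in logfmt-style pairs: an "Executing migration" line followed by a "Migration successfully executed" line carrying a duration. A minimal parsing sketch follows (illustrative only; the regex and helper name are assumptions based solely on the line format visible in this log, not part of this job's tooling):

```python
import re

# Minimal sketch: parse one Grafana migrator logfmt entry as it appears in
# this console log (assumption: only the format shown above, nothing more).
ENTRY = re.compile(
    r't=(?P<ts>\S+) level=(?P<level>\w+) '
    r'msg="(?P<msg>[^"]+)" id="?(?P<id>[^"]+?)"?'
    r'(?: duration=(?P<duration>\S+))?$'
)

def parse_entry(line: str):
    """Return a dict of fields from a single migrator line, or None."""
    m = ENTRY.search(line)
    return m.groupdict() if m else None

# Sample taken verbatim from the log above.
sample = ('14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.103246963Z '
          'level=info msg="Migration successfully executed" '
          'id="add index rule_org_id, rule_uid, current_state on alert_instance" '
          'duration=762.211µs')
print(parse_entry(sample))  # fields: ts, level, msg, id, duration
```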
migration" id="Add column default in alert_configuration" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.447195698Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=10.545647ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.450580329Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.450617119Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=36.61µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.458396205Z level=info msg="Executing migration" id="add column org_id in alert_configuration" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.467214797Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=8.821322ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.472607197Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.474574157Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=1.96566ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.483055333Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.49224813Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=9.195067ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.499541069Z level=info msg="Executing migration" id=create_ngalert_configuration_table 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.500716766Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=1.174527ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.505597569Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.507579319Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=1.98149ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.512507141Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.524622022Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=12.104441ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.559142806Z level=info msg="Executing migration" id="create provenance_type table" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.561014665Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=1.872009ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.567344759Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.569381509Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=2.0359ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.58759308Z level=info msg="Executing migration" 
id="create alert_image table" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.588477613Z level=info msg="Migration successfully executed" id="create alert_image table" duration=886.803µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.59427934Z level=info msg="Executing migration" id="add unique index on token to alert_image table" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.595025841Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=746.361µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.598492723Z level=info msg="Executing migration" id="support longer URLs in alert_image table" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.598506403Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=13.87µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.604593253Z level=info msg="Executing migration" id=create_alert_configuration_history_table 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.605313524Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=720.201µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.610172897Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.610920538Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=745.8µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.618039194Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.61846101Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.624294667Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.624869365Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=574.198µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.630302156Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.631896961Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=1.594135ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.636196905Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.644442337Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=8.245522ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.649644385Z level=info msg="Executing migration" id="create library_element table v1" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.650715061Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=1.070206ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.654615859Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind" 14:59:56 grafana | logger=migrator 
t=2025-06-13T14:56:43.655692025Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=1.075676ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.660146461Z level=info msg="Executing migration" id="create library_element_connection table v1" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.661831357Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=1.684166ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.674877571Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.676962592Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=2.080191ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.683164144Z level=info msg="Executing migration" id="add unique index library_element org_id_uid" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.684405573Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=1.241769ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.690602225Z level=info msg="Executing migration" id="increase max description length to 2048" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.690667066Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=65.471µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.69695216Z level=info msg="Executing migration" id="alter library_element model to mediumtext" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.69698008Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=29.18µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.700700976Z level=info msg="Executing migration" id="add library_element folder uid" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.710901658Z level=info msg="Migration successfully executed" id="add library_element folder uid" duration=10.200632ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.719149401Z level=info msg="Executing migration" id="populate library_element folder_uid" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.719684718Z level=info msg="Migration successfully executed" id="populate library_element folder_uid" duration=534.578µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.723355004Z level=info msg="Executing migration" id="add index library_element org_id-folder_uid-name-kind" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.724779324Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_uid-name-kind" duration=1.42323ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.729552686Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.729992122Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=439.006µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.735662697Z level=info msg="Executing migration" id="create data_keys table" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.736941995Z level=info msg="Migration successfully executed" id="create data_keys table" duration=1.278628ms 14:59:56 grafana | 
logger=migrator t=2025-06-13T14:56:43.746384467Z level=info msg="Executing migration" id="create secrets table" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.74801265Z level=info msg="Migration successfully executed" id="create secrets table" duration=1.628073ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.751198869Z level=info msg="Executing migration" id="rename data_keys name column to id" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.78486225Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=33.662591ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.800102947Z level=info msg="Executing migration" id="add name column into data_keys" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.811625038Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=11.522391ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.816173027Z level=info msg="Executing migration" id="copy data_keys id column values into name" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.816326709Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=153.192µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.819617688Z level=info msg="Executing migration" id="rename data_keys name column to label" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.852429467Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=32.812619ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.859098886Z level=info msg="Executing migration" id="rename data_keys id column back to name" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.891668252Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=32.569015ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.895369766Z level=info msg="Executing migration" id="create kv_store table v1" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.896385071Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=1.015165ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.903276564Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.90438641Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=1.109896ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.913445625Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.913919843Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=472.968µs 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.930092853Z level=info msg="Executing migration" id="create permission table" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.931341582Z level=info msg="Migration successfully executed" id="create permission table" duration=1.250279ms 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.938389067Z level=info msg="Executing migration" id="add unique index permission.role_id" 14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.940404917Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=2.01887ms 14:59:56 grafana | 
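Durations in these entries mix units: the MySQL-specific steps such as "alter alert_rule table data column to mediumtext in mysql" finish in tens of microseconds (consistent with no-ops on this job's database backend), while data-copying renames such as "rename data_keys name column to id" take tens of milliseconds. A minimal sketch for normalising those duration strings and ranking the slowest migrations (helper names are hypothetical; the sample values are copied from the lines above):

```python
# Normalise duration strings seen in this log (µs/ms/s) to milliseconds.
UNITS_TO_MS = {"µs": 1e-3, "ms": 1.0, "s": 1e3}

def duration_ms(raw: str) -> float:
    """Convert e.g. '762.211µs' or '28.017187ms' to milliseconds."""
    for unit in ("µs", "ms", "s"):  # check 'ms' before bare 's'
        if raw.endswith(unit):
            return float(raw[: -len(unit)]) * UNITS_TO_MS[unit]
    raise ValueError(f"unrecognised duration: {raw!r}")

# Sample values taken from the migrator lines above.
durations = {
    "rename data_keys name column to id": "33.662591ms",
    "rename def_org_id to rule_org_id in alert_instance": "28.017187ms",
    "alter alert_rule table data column to mediumtext in mysql": "22.96µs",
}
for mig, raw in sorted(durations.items(), key=lambda kv: -duration_ms(kv[1])):
    print(f"{duration_ms(raw):10.3f} ms  {mig}")
```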
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.945353421Z level=info msg="Executing migration" id="add unique index role_id_action_scope"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.946439227Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=1.085676ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.950157813Z level=info msg="Executing migration" id="create role table"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.951160738Z level=info msg="Migration successfully executed" id="create role table" duration=1.002445ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.956493907Z level=info msg="Executing migration" id="add column display_name"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.96474356Z level=info msg="Migration successfully executed" id="add column display_name" duration=8.248953ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.969258127Z level=info msg="Executing migration" id="add column group_name"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.97680884Z level=info msg="Migration successfully executed" id="add column group_name" duration=7.550303ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.979731423Z level=info msg="Executing migration" id="add index role.org_id"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.981705063Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=1.97303ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.987218855Z level=info msg="Executing migration" id="add unique index role_org_id_name"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.988441673Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=1.222078ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.992427263Z level=info msg="Executing migration" id="add index role_org_id_uid"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.993524759Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=1.097186ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.996738057Z level=info msg="Executing migration" id="create team role table"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:43.997762042Z level=info msg="Migration successfully executed" id="create team role table" duration=1.025365ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.004932393Z level=info msg="Executing migration" id="add index team_role.org_id"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.007107644Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=2.174431ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.013986021Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.015524638Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=1.538767ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.019531525Z level=info msg="Executing migration" id="add index team_role.team_id"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.020758556Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=1.226871ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.025846673Z level=info msg="Executing migration" id="create user role table"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.026829119Z level=info msg="Migration successfully executed" id="create user role table" duration=982.816µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.030094614Z level=info msg="Executing migration" id="add index user_role.org_id"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.032022497Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=1.926533ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.039574945Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.041556118Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=1.983003ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.05820469Z level=info msg="Executing migration" id="add index user_role.user_id"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.060271495Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=2.068705ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.067902145Z level=info msg="Executing migration" id="create builtin role table"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.068802329Z level=info msg="Migration successfully executed" id="create builtin role table" duration=899.874µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.077889223Z level=info msg="Executing migration" id="add index builtin_role.role_id"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.07947041Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=1.581327ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.083995126Z level=info msg="Executing migration" id="add index builtin_role.name"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.085050135Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=1.054909ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.089460829Z level=info msg="Executing migration" id="Add column org_id to builtin_role table"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.098169236Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=8.707737ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.10253071Z level=info msg="Executing migration" id="add index builtin_role.org_id"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.103255912Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=724.422µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.107538125Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.108378149Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=835.414µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.112050341Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.113552167Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=1.501106ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.118148545Z level=info msg="Executing migration" id="add unique index role.uid"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.119206873Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=1.058178ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.128638682Z level=info msg="Executing migration" id="create seed assignment table"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.129797922Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=1.15885ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.136260911Z level=info msg="Executing migration" id="add unique index builtin_role_role_name"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.13737052Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=1.109309ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.141173445Z level=info msg="Executing migration" id="add column hidden to role table"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.150719416Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=9.546061ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.155160241Z level=info msg="Executing migration" id="permission kind migration"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.163436812Z level=info msg="Migration successfully executed" id="permission kind migration" duration=8.275211ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.167355298Z level=info msg="Executing migration" id="permission attribute migration"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.17340066Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=6.043783ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.179939441Z level=info msg="Executing migration" id="permission identifier migration"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.190232255Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=10.291184ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.197897515Z level=info msg="Executing migration" id="add permission identifier index"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.199044644Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=1.145669ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.203627922Z level=info msg="Executing migration" id="add permission action scope role_id index"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.205340041Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=1.71209ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.211409383Z level=info msg="Executing migration" id="remove permission role_id action scope index"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.212574123Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=1.16448ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.21889017Z level=info msg="Executing migration" id="add group mapping UID column to user_role table"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.230532317Z level=info msg="Migration successfully executed" id="add group mapping UID column to user_role table" duration=11.644217ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.234062547Z level=info msg="Executing migration" id="add user_role org ID, user ID, role ID, group mapping UID index"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.23486996Z level=info msg="Migration successfully executed" id="add user_role org ID, user ID, role ID, group mapping UID index" duration=810.373µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.239969867Z level=info msg="Executing migration" id="remove user_role org ID, user ID, role ID index"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.241103785Z level=info msg="Migration successfully executed" id="remove user_role org ID, user ID, role ID index" duration=1.131918ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.2454965Z level=info msg="Executing migration" id="create query_history table v1"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.247534145Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=2.036835ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.252473019Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.25437894Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=1.901741ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.258675434Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.258707764Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=34.06µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.264589084Z level=info msg="Executing migration" id="create query_history_details table v1"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.265752743Z level=info msg="Migration successfully executed" id="create query_history_details table v1" duration=1.163499ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.269171081Z level=info msg="Executing migration" id="rbac disabled migrator"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.269270963Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=56.64µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.272750052Z level=info msg="Executing migration" id="teams permissions migration"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.27326041Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=510.188µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.278173554Z level=info msg="Executing migration" id="dashboard permissions"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.278872165Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=699.651µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.2850918Z level=info msg="Executing migration" id="dashboard permissions uid scopes"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.285867423Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=775.703µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.289657058Z level=info msg="Executing migration" id="drop managed folder create actions"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.290133846Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=475.698µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.30867175Z level=info msg="Executing migration" id="alerting notification permissions"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.309589345Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=918.375µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.315080028Z level=info msg="Executing migration" id="create query_history_star table v1"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.316070805Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=990.487µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.319970051Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.321262553Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=1.292212ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.326141736Z level=info msg="Executing migration" id="add column org_id in query_history_star"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.338684057Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=12.542032ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.345602205Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.345627025Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=26.53µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.351252981Z level=info msg="Executing migration" id="create correlation table v1"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.352797366Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=1.544025ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.361648816Z level=info msg="Executing migration" id="add index correlations.uid"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.363937335Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=2.290489ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.371666386Z level=info msg="Executing migration" id="add index correlations.source_uid"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.373424996Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=1.76046ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.377365932Z level=info msg="Executing migration" id="add correlation config column"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.390839581Z level=info msg="Migration successfully executed" id="add correlation config column" duration=13.464938ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.398155235Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.399440216Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=1.285171ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.403113878Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.404528042Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=1.413864ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.409219481Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.431246794Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=22.022563ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.436801658Z level=info msg="Executing migration" id="create correlation v2"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.437665113Z level=info msg="Migration successfully executed" id="create correlation v2" duration=863.015µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.443587584Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.444801714Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=1.21342ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.447826345Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.449030196Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=1.206121ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.456005733Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.45754881Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=1.542927ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.461587098Z level=info msg="Executing migration" id="copy correlation v1 to v2"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.461952474Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=364.966µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.4687884Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.469744536Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=955.826µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.480966216Z level=info msg="Executing migration" id="add provisioning column"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.49060334Z level=info msg="Migration successfully executed" id="add provisioning column" duration=9.637714ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.494006817Z level=info msg="Executing migration" id="add type column"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.50185073Z level=info msg="Migration successfully executed" id="add type column" duration=7.842673ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.505896078Z level=info msg="Executing migration" id="create entity_events table"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.50660985Z level=info msg="Migration successfully executed" id="create entity_events table" duration=714.822µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.511090426Z level=info msg="Executing migration" id="create dashboard public config v1"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.511884239Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=793.483µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.517767329Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.518346428Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.525344417Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.526218162Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.535261015Z level=info msg="Executing migration" id="Drop old dashboard public config table"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.536957594Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=1.695289ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.545136512Z level=info msg="Executing migration" id="recreate dashboard public config v1"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.546419474Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=1.283071ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.565805282Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.568773512Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=2.96807ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.584732852Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.587356057Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=2.622365ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.596084274Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.597272475Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=1.188531ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.606100164Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.607520848Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.450704ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.616885367Z level=info msg="Executing migration" id="Drop public config table"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.618090637Z level=info msg="Migration successfully executed" id="Drop public config table" duration=1.20336ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.62650456Z level=info msg="Executing migration" id="Recreate dashboard public config v2"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.627995645Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=1.489306ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.633785573Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2"
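The level=warn entries in this stretch ("Skipping migration: Already executed, but not recorded in migration log") are the only non-info migrator lines; everything else completed successfully. A quick filter surfaces such anomalies from a captured console log (sketch only; the file name is an assumption, and the level= token is taken from the lines above):

```python
# Print every migrator line whose level is not "info" from a saved console
# log. "console.log" is a hypothetical capture of the output shown above.
with open("console.log", encoding="utf-8") as fh:
    for line in fh:
        if "logger=migrator" in line and "level=info" not in line:
            print(line.rstrip())
```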
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.635057604Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=1.271711ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.639174304Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.640404165Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.229341ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.647122548Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.64838624Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=1.263132ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.651871909Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.676438205Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=24.565526ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.689838792Z level=info msg="Executing migration" id="add annotations_enabled column"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.70094201Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=11.103098ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.706200668Z level=info msg="Executing migration" id="add time_selection_enabled column"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.712457575Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=6.256417ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.721580369Z level=info msg="Executing migration" id="delete orphaned public dashboards"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.721796522Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=215.503µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.727445548Z level=info msg="Executing migration" id="add share column"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.736495271Z level=info msg="Migration successfully executed" id="add share column" duration=9.045143ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.746242127Z level=info msg="Executing migration" id="backfill empty share column fields with default of public"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.74645562Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=212.024µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.751200741Z level=info msg="Executing migration" id="create file table"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.752249968Z level=info msg="Migration successfully executed" id="create file table" duration=1.028387ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.756620402Z level=info msg="Executing migration" id="file table idx: path natural pk"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.757726511Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=1.10584ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.762953649Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.764748439Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=1.79446ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.770030769Z level=info msg="Executing migration" id="create file_meta table"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.770820913Z level=info msg="Migration successfully executed" id="create file_meta table" duration=789.664µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.779437599Z level=info msg="Executing migration" id="file table idx: path key"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.780252512Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=814.533µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.783735021Z level=info msg="Executing migration" id="set path collation in file table"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.783749961Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=15.31µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.789766883Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.789785883Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=20.26µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.7955173Z level=info msg="Executing migration" id="managed permissions migration"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.796378885Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=861.015µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.812875085Z level=info msg="Executing migration" id="managed folder permissions alert actions migration"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.813372953Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=497.958µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.81970788Z level=info msg="Executing migration" id="RBAC action name migrator"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.821124533Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=1.416453ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.830385061Z level=info msg="Executing migration" id="Add UID column to playlist"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.840376319Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=9.984558ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.844737474Z level=info msg="Executing migration" id="Update uid column values in playlist"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.845012418Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=275.014µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.85512403Z level=info msg="Executing migration" id="Add index for uid in playlist"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.857471179Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=2.349119ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.863216717Z level=info msg="Executing migration" id="update group index for alert rules"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.863621153Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=409.756µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.869832098Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.870065052Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=232.994µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.875299861Z level=info msg="Executing migration" id="admin only folder/dashboard permission"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.876080724Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=780.193µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.883923857Z level=info msg="Executing migration" id="add action column to seed_assignment"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.895886819Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=11.939802ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.901489004Z level=info msg="Executing migration" id="add scope column to seed_assignment"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.911019396Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=9.525622ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.914306251Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.915261937Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=957.966µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:44.918915279Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.00104437Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=82.1235ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.010060772Z level=info msg="Executing migration" id="add unique index builtin_role_name back"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.011020188Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=964.196µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.020276345Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.022255178Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=1.978233ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.026578232Z level=info msg="Executing migration" id="add primary key to seed_assigment"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.058516342Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=31.93567ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.069845074Z level=info msg="Executing migration" id="add origin column to seed_assignment"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.076517347Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=6.673063ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.083440404Z level=info msg="Executing migration" id="add origin to plugin seed_assignment"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.083906612Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=466.788µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.088828365Z level=info msg="Executing migration" id="prevent seeding OnCall access"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.08910337Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=277.055µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.094363089Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.094745145Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=382.166µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.10443751Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.104760515Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=322.455µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.111837384Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.11217101Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=333.586µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.121691432Z level=info msg="Executing migration" id="create folder table"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.122942832Z level=info msg="Migration successfully executed" id="create folder table" duration=1.25296ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.12751862Z level=info msg="Executing migration" id="Add index for parent_uid"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.129360822Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=1.842202ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.136153526Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.13698274Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=829.044µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.143124044Z level=info msg="Executing migration" id="Update folder title length"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.143144404Z level=info msg="Migration successfully executed" id="Update folder title length" duration=21.32µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.146287798Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.148387233Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=2.096215ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.155100897Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.156177955Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=1.074158ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.164420205Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.165546663Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=1.125988ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.169248177Z level=info msg="Executing migration" id="Sync dashboard and folder table"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.169685044Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=436.177µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.173088351Z level=info msg="Executing migration" id="Remove ghost folders from the folder table"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.173347536Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=259.275µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.187365383Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.191167038Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=3.802885ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.194917341Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.195858897Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=941.316µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.201755176Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.202903046Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=1.14156ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.2090359Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.211206657Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=2.172287ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.215309096Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.216543117Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=1.233721ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.221260057Z level=info msg="Executing migration" id="Remove unique index UQE_folder_org_id_parent_uid_title"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.222562589Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_org_id_parent_uid_title" duration=1.301433ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.226423134Z level=info msg="Executing migration" id="create anon_device table"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.227422832Z level=info msg="Migration successfully executed" id="create anon_device table" duration=996.618µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.234615263Z level=info msg="Executing migration" id="add unique index anon_device.device_id"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.237324949Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=2.708806ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.243096386Z level=info msg="Executing migration" id="add index anon_device.updated_at"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.244152245Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=1.055859ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.24800784Z level=info msg="Executing migration" id="create signing_key table"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.248863734Z level=info msg="Migration successfully executed" id="create signing_key table" duration=855.484µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.259091997Z level=info msg="Executing migration" id="add unique index signing_key.key_id"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.263019843Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=3.923766ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.267647292Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.26998731Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=2.239738ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.274722181Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.275378133Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=658.012µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.281605758Z level=info msg="Executing migration" id="Add folder_uid for dashboard"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.29350679Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=11.903242ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.330271872Z level=info msg="Executing migration" id="Populate dashboard folder_uid column"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.33131231Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=1.043108ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.339922666Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.339956196Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=36.86µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.348391079Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.349767992Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=1.374373ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.380817907Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.380842618Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=26.021µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.392077948Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.394251025Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=2.172477ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.401661281Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.403116265Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=1.454784ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.411213452Z level=info msg="Executing migration" id="Remove unique index for dashboard_org_id_folder_uid_title_is_folder"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.413230486Z level=info msg="Migration successfully executed" id="Remove unique index for dashboard_org_id_folder_uid_title_is_folder" duration=2.016634ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.419866148Z level=info msg="Executing migration" id="create sso_setting table"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.421845482Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=1.978434ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.42586909Z level=info msg="Executing migration" id="copy kvstore migration status to each org"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.426669914Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=801.404µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.432538323Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.433012621Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=475.608µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.437969105Z level=info msg="Executing migration" id="managed dashboard permissions annotation actions migration"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.438968072Z level=info msg="Migration successfully executed" id="managed dashboard permissions annotation actions migration" duration=998.127µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.44244332Z level=info msg="Executing migration" id="create cloud_migration table v1"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.443365257Z level=info msg="Migration successfully executed" id="create cloud_migration table v1" duration=918.217µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.456453748Z level=info msg="Executing migration" id="create cloud_migration_run table v1"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.457493626Z level=info msg="Migration successfully executed" id="create cloud_migration_run table v1" duration=1.042048ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.463187662Z level=info msg="Executing migration" id="add stack_id column"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.474461693Z level=info msg="Migration successfully executed" id="add stack_id column" duration=11.266441ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.480390553Z level=info msg="Executing migration" id="add region_slug column"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.490108177Z level=info msg="Migration successfully executed" id="add region_slug column" duration=9.715294ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.497252819Z level=info msg="Executing migration" id="add cluster_slug column"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.51090963Z level=info msg="Migration successfully executed" id="add cluster_slug column" duration=13.651501ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.51620895Z level=info msg="Executing migration" id="add migration uid column"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.52566919Z level=info msg="Migration successfully executed" id="add migration uid column" duration=9.458709ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.531228564Z level=info msg="Executing migration" id="Update uid column values for migration"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.531578969Z level=info msg="Migration successfully executed" id="Update uid column values for migration" duration=350.405µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.547787394Z level=info msg="Executing migration" id="Add unique index migration_uid"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.549253058Z level=info msg="Migration successfully executed" id="Add unique index migration_uid" duration=1.465664ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.554925744Z level=info msg="Executing migration" id="add migration run uid column"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.563698844Z level=info msg="Migration successfully executed" id="add migration run uid column" duration=8.7731ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.588201698Z level=info msg="Executing migration" id="Update uid column values for migration run"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.588790698Z level=info msg="Migration successfully executed" id="Update uid column values for migration run" duration=593.02µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.596709452Z level=info msg="Executing migration" id="Add unique index migration_run_uid"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.597993044Z level=info msg="Migration successfully executed" id="Add unique index migration_run_uid" duration=1.283592ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.603303764Z level=info msg="Executing migration" id="Rename table cloud_migration to cloud_migration_session_tmp_qwerty - v1"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.628964028Z level=info msg="Migration successfully executed" id="Rename table cloud_migration to cloud_migration_session_tmp_qwerty - v1" duration=25.660844ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.633170289Z level=info msg="Executing migration" id="create cloud_migration_session v2"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.633926232Z level=info msg="Migration successfully executed" id="create cloud_migration_session v2" duration=755.943µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.640175568Z level=info msg="Executing migration" id="create index UQE_cloud_migration_session_uid - v2"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.641074383Z level=info msg="Migration successfully executed" id="create index UQE_cloud_migration_session_uid - v2" duration=898.815µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.646928842Z level=info msg="Executing migration" id="copy cloud_migration_session v1 to v2"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.647605854Z level=info msg="Migration successfully executed" id="copy cloud_migration_session v1 to v2" duration=676.012µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.656735998Z level=info msg="Executing migration" id="drop cloud_migration_session_tmp_qwerty"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.658392286Z level=info msg="Migration successfully executed" id="drop cloud_migration_session_tmp_qwerty" duration=1.656288ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.670339039Z level=info msg="Executing migration" id="Rename table cloud_migration_run to cloud_migration_snapshot_tmp_qwerty - v1"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.694974135Z level=info msg="Migration successfully executed" id="Rename table cloud_migration_run to cloud_migration_snapshot_tmp_qwerty - v1" duration=24.635176ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.698468695Z level=info msg="Executing migration" id="create cloud_migration_snapshot v2"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.699313549Z level=info msg="Migration successfully executed" id="create cloud_migration_snapshot v2" duration=844.324µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.715477563Z level=info msg="Executing migration" id="create index UQE_cloud_migration_snapshot_uid - v2"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.718409092Z level=info msg="Migration successfully executed" id="create index UQE_cloud_migration_snapshot_uid - v2" duration=2.910379ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.725927589Z level=info msg="Executing migration" id="copy cloud_migration_snapshot v1 to v2"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.726313786Z level=info msg="Migration successfully executed" id="copy cloud_migration_snapshot v1 to v2" duration=385.797µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.729800315Z level=info msg="Executing migration" id="drop cloud_migration_snapshot_tmp_qwerty"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.730744861Z level=info msg="Migration successfully executed" id="drop cloud_migration_snapshot_tmp_qwerty" duration=941.766µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.738501512Z level=info msg="Executing migration" id="add snapshot upload_url column"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.751862198Z level=info msg="Migration successfully executed" id="add snapshot upload_url column" duration=13.360736ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.758192845Z level=info msg="Executing migration" id="add snapshot status column"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.768822286Z level=info msg="Migration successfully executed" id="add snapshot status column" duration=10.62944ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.773475784Z level=info msg="Executing migration" id="add snapshot local_directory column"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.783125078Z level=info msg="Migration successfully executed" id="add snapshot local_directory column" duration=9.648283ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.789950703Z level=info msg="Executing migration" id="add snapshot gms_snapshot_uid column"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.800548922Z level=info msg="Migration successfully executed" id="add snapshot gms_snapshot_uid column" duration=10.597409ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.808173152Z level=info msg="Executing migration" id="add snapshot encryption_key column"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.815855442Z level=info msg="Migration successfully executed" id="add snapshot encryption_key column" duration=7.68149ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.822228289Z level=info msg="Executing migration" id="add snapshot error_string column"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.829291758Z level=info msg="Migration successfully executed" id="add snapshot error_string column" duration=7.063469ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.845241119Z level=info msg="Executing migration" id="create cloud_migration_resource table v1"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.847621619Z level=info msg="Migration successfully executed" id="create cloud_migration_resource table v1" duration=2.3805ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.854721729Z level=info msg="Executing migration" id="delete cloud_migration_snapshot.result column"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.892642251Z level=info msg="Migration successfully executed" id="delete cloud_migration_snapshot.result column" duration=37.920522ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.896419285Z level=info msg="Executing migration" id="add cloud_migration_resource.name column"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.905825445Z level=info msg="Migration successfully executed" id="add cloud_migration_resource.name column" duration=9.40516ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.917443961Z level=info msg="Executing migration" id="add cloud_migration_resource.parent_name column"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.929071788Z level=info msg="Migration successfully executed" id="add cloud_migration_resource.parent_name column" duration=11.627547ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.933973731Z level=info msg="Executing migration" id="add cloud_migration_session.org_id column"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.944108982Z level=info msg="Migration successfully executed" id="add cloud_migration_session.org_id column" duration=10.135251ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.954649331Z level=info msg="Executing migration" id="add cloud_migration_resource.error_code column"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.964250854Z level=info msg="Migration successfully executed" id="add cloud_migration_resource.error_code column" duration=9.601523ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.982720706Z level=info msg="Executing migration" id="increase resource_uid column length"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.982753196Z level=info msg="Migration successfully executed" id="increase resource_uid column length" duration=33.13µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.991015496Z level=info msg="Executing migration" id="alter kv_store.value to longtext"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:45.991037957Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=23.04µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:46.001760288Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:46.015295098Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=13.53483ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:46.030048827Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:46.043293791Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=13.244964ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:46.053562795Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:46.053939222Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=375.927µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:46.060027755Z level=info msg="Executing migration" id="managed folder permissions alerting silences actions migration"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:46.060411701Z level=info msg="Migration successfully executed" id="managed folder permissions alerting silences actions migration" duration=384.076µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:46.073844269Z level=info msg="Executing migration" id="add record column to alert_rule table"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:46.088022309Z level=info msg="Migration successfully executed" id="add record column to alert_rule table" duration=14.17834ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:46.11885192Z level=info msg="Executing migration" id="add record column to alert_rule_version table"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:46.129071553Z level=info msg="Migration successfully executed" id="add record column to alert_rule_version table" duration=10.219633ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:46.138834759Z level=info msg="Executing migration" id="add resolved_at column to alert_instance table"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:46.146338936Z level=info msg="Migration successfully executed" id="add resolved_at column to alert_instance table" duration=7.504177ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:46.1531195Z level=info msg="Executing migration" id="add last_sent_at column to alert_instance table"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:46.163087959Z level=info msg="Migration successfully executed" id="add last_sent_at column to alert_instance table" duration=9.968459ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:46.175720623Z level=info msg="Executing migration" id="Add scope to alert.notifications.receivers:read and alert.notifications.receivers.secrets:read"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:46.176292982Z level=info msg="Migration successfully executed" id="Add scope to alert.notifications.receivers:read and alert.notifications.receivers.secrets:read" duration=572.349µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:46.182704481Z level=info msg="Executing migration" id="add metadata column to alert_rule table"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:46.189845532Z level=info msg="Migration successfully executed" id="add metadata column to alert_rule table" duration=7.140751ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:46.197565773Z level=info msg="Executing migration" id="add metadata column to alert_rule_version table"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:46.20450251Z level=info msg="Migration successfully executed" id="add metadata column to alert_rule_version table" duration=6.936737ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:46.218808812Z level=info msg="Executing migration" id="delete orphaned service account permissions"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:46.219188779Z level=info msg="Migration successfully executed" id="delete orphaned service account permissions" duration=380.067µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:46.244908234Z level=info msg="Executing migration" id="adding action set permissions"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:46.245517414Z level=info msg="Migration successfully executed" id="adding action set permissions" duration=610.07µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:46.256922487Z level=info msg="Executing migration" id="create user_external_session table"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:46.258761319Z level=info msg="Migration successfully executed" id="create user_external_session table" duration=1.838832ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:46.270191392Z level=info msg="Executing migration" id="increase name_id column length to 1024"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:46.270221403Z level=info msg="Migration successfully executed" id="increase name_id column length to 1024" duration=34.201µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:46.285622723Z level=info msg="Executing migration" id="increase session_id column length to 1024"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:46.285669354Z level=info msg="Migration successfully executed" id="increase session_id column length to 1024" duration=47.831µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:46.294494653Z level=info msg="Executing migration" id="remove scope from alert.notifications.receivers:create"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:46.295195265Z level=info msg="Migration successfully executed" id="remove scope from alert.notifications.receivers:create" duration=700.122µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:46.30557801Z level=info msg="Executing migration" id="add created_by column to alert_rule_version table"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:46.317553094Z level=info msg="Migration successfully executed" id="add created_by column to alert_rule_version table" duration=11.975104ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:46.330732986Z level=info msg="Executing migration" id="add updated_by column to alert_rule table"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:46.343815978Z level=info msg="Migration successfully executed" id="add updated_by column to alert_rule table" duration=13.083192ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:46.354029541Z level=info msg="Executing migration" id="add alert_rule_state table"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:46.355830561Z level=info msg="Migration successfully executed" id="add alert_rule_state table" duration=1.80104ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:46.378610737Z level=info msg="Executing migration" id="add index to alert_rule_state on org_id and rule_uid columns"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:46.380885675Z level=info msg="Migration successfully executed" id="add index to alert_rule_state on org_id and rule_uid columns" duration=2.275038ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:46.38589191Z level=info msg="Executing migration" id="add guid column to alert_rule table"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:46.396318917Z level=info msg="Migration successfully executed" id="add guid column to alert_rule table" duration=10.427007ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:46.403174373Z level=info msg="Executing migration" id="add rule_guid column to alert_rule_version table"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:46.41485171Z level=info msg="Migration successfully executed" id="add rule_guid column to alert_rule_version table" duration=11.677337ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:46.420537046Z level=info msg="Executing migration" id="cleanup alert_rule_version table"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:46.420558467Z level=info msg="Rule version record limit is not set, fallback to 100" limit=0
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:46.420837072Z level=info msg="Cleaning up table `alert_rule_version`" batchSize=50 batches=0 keepVersions=100
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:46.420852073Z level=info msg="Migration successfully executed" id="cleanup alert_rule_version table" duration=315.027µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:46.425864917Z level=info msg="Executing migration" id="populate rule guid in alert rule table"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:46.426502328Z level=info msg="Migration successfully executed" id="populate rule guid in alert rule table" duration=637.411µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:46.431751676Z level=info msg="Executing migration" id="drop index in alert_rule_version table on rule_org_id, rule_uid and version columns"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:46.4337433Z level=info msg="Migration successfully executed" id="drop index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.991154ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:46.437925991Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid, rule_guid and version columns"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:46.439362415Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid, rule_guid and version columns" duration=1.436434ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:46.445772783Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_guid and version columns"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:46.447005795Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_guid and version columns" duration=1.232572ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:46.450483044Z level=info msg="Executing migration" id="add index in alert_rule table on guid columns"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:46.451710265Z level=info msg="Migration successfully executed" id="add index in alert_rule table on guid columns" duration=1.223932ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:46.458013701Z level=info msg="Executing migration" id="add keep_firing_for column to alert_rule"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:46.471352987Z level=info msg="Migration successfully executed" id="add keep_firing_for column to alert_rule" duration=13.340036ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:46.478088681Z level=info msg="Executing migration" id="add keep_firing_for column to alert_rule_version"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:46.487178595Z level=info msg="Migration successfully executed" id="add keep_firing_for column to alert_rule_version" duration=9.088724ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:46.495385404Z level=info msg="Executing migration" id="add missing_series_evals_to_resolve column to alert_rule"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:46.507357976Z level=info msg="Migration successfully executed" id="add missing_series_evals_to_resolve column to alert_rule" duration=11.971772ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:46.510867046Z level=info msg="Executing migration" id="add missing_series_evals_to_resolve column to alert_rule_version"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:46.520105582Z level=info msg="Migration successfully executed" id="add missing_series_evals_to_resolve column to alert_rule_version" duration=9.237576ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:46.526258236Z level=info msg="Executing migration" id="remove the datasources:drilldown action"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:46.526447859Z level=info msg="Removed 0 datasources:drilldown permissions"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:46.526461289Z level=info msg="Migration successfully executed" id="remove the datasources:drilldown action" duration=203.403µs
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:46.531656858Z level=info msg="Executing migration" id="remove title in folder unique index"
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:46.533624391Z level=info msg="Migration successfully executed" id="remove title in folder unique index" duration=1.963233ms
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:46.538398262Z level=info msg="migrations completed" performed=654 skipped=0 duration=7.497013238s
14:59:56 grafana | logger=migrator t=2025-06-13T14:56:46.539084484Z level=info msg="Unlocking database"
14:59:56 grafana | logger=sqlstore t=2025-06-13T14:56:46.554520434Z level=info msg="Created default admin" user=admin
14:59:56 grafana | logger=sqlstore t=2025-06-13T14:56:46.554719878Z level=info msg="Created default organization"
14:59:56 grafana | logger=secrets t=2025-06-13T14:56:46.560595537Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1
14:59:56 grafana | logger=plugin.angulardetectorsprovider.dynamic t=2025-06-13T14:56:46.651079769Z level=info msg="Restored cache from database" duration=747.703µs
14:59:56 grafana | logger=resource-migrator t=2025-06-13T14:56:46.662716856Z level=info msg="Locking database"
14:59:56 grafana | logger=resource-migrator t=2025-06-13T14:56:46.662738447Z level=info msg="Starting DB migrations"
14:59:56 grafana | logger=resource-migrator t=2025-06-13T14:56:46.670335945Z level=info msg="Executing migration" id="create resource_migration_log table"
14:59:56 grafana | logger=resource-migrator t=2025-06-13T14:56:46.671252331Z level=info msg="Migration successfully executed" id="create resource_migration_log table" duration=915.816µs
14:59:56 grafana | logger=resource-migrator t=2025-06-13T14:56:46.679038103Z level=info msg="Executing migration" id="Initialize resource tables"
14:59:56 grafana | logger=resource-migrator t=2025-06-13T14:56:46.679126124Z level=info msg="Migration successfully executed" id="Initialize resource tables" duration=89.531µs
14:59:56 grafana | logger=resource-migrator t=2025-06-13T14:56:46.685513222Z level=info msg="Executing migration" id="drop table resource"
14:59:56 grafana | logger=resource-migrator t=2025-06-13T14:56:46.685617944Z level=info msg="Migration successfully executed" id="drop table resource" duration=105.172µs
14:59:56 grafana | logger=resource-migrator t=2025-06-13T14:56:46.689412758Z level=info msg="Executing migration" id="create table resource"
14:59:56 grafana | logger=resource-migrator t=2025-06-13T14:56:46.690519497Z level=info msg="Migration successfully executed" id="create table resource" duration=1.10675ms
14:59:56 grafana | logger=resource-migrator t=2025-06-13T14:56:46.695049333Z level=info msg="Executing migration" id="create table resource, index: 0"
14:59:56 grafana | logger=resource-migrator t=2025-06-13T14:56:46.696395356Z level=info msg="Migration successfully executed" id="create table resource, index: 0" duration=1.345823ms
14:59:56 grafana | logger=resource-migrator t=2025-06-13T14:56:46.701846409Z level=info msg="Executing migration" id="drop table resource_history"
14:59:56 grafana | logger=resource-migrator t=2025-06-13T14:56:46.702092793Z level=info msg="Migration successfully executed" id="drop table resource_history" duration=246.414µs
14:59:56 grafana | logger=resource-migrator t=2025-06-13T14:56:46.709732332Z level=info msg="Executing migration" id="create table resource_history"
14:59:56 grafana | logger=resource-migrator t=2025-06-13T14:56:46.710973413Z level=info msg="Migration successfully executed" id="create table resource_history" duration=1.241051ms
14:59:56 grafana | logger=resource-migrator t=2025-06-13T14:56:46.716036289Z level=info msg="Executing migration" id="create table resource_history, index: 0"
14:59:56 grafana | logger=resource-migrator t=2025-06-13T14:56:46.717726318Z level=info msg="Migration successfully executed" id="create table resource_history, index: 0" duration=1.690129ms
14:59:56 grafana | logger=resource-migrator t=2025-06-13T14:56:46.724876118Z level=info msg="Executing migration" id="create table resource_history, index: 1"
14:59:56 grafana | logger=resource-migrator t=2025-06-13T14:56:46.727290659Z level=info msg="Migration successfully executed" id="create table resource_history, index: 1" duration=2.414541ms
14:59:56 grafana | logger=resource-migrator t=2025-06-13T14:56:46.733338911Z level=info msg="Executing migration" id="drop table resource_version"
14:59:56 grafana | logger=resource-migrator t=2025-06-13T14:56:46.733459133Z level=info msg="Migration successfully executed" id="drop table resource_version" duration=123.002µs
14:59:56 grafana | logger=resource-migrator t=2025-06-13T14:56:46.748660701Z level=info msg="Executing migration" id="create table resource_version"
14:59:56 grafana | logger=resource-migrator t=2025-06-13T14:56:46.750337129Z level=info msg="Migration successfully executed" id="create table resource_version" duration=1.677348ms
14:59:56 grafana | logger=resource-migrator t=2025-06-13T14:56:46.756659816Z level=info msg="Executing migration" id="create table resource_version, index: 0"
14:59:56 grafana | logger=resource-migrator t=2025-06-13T14:56:46.759060687Z level=info msg="Migration successfully executed" id="create table resource_version, index: 0" duration=2.401731ms
14:59:56 grafana | logger=resource-migrator t=2025-06-13T14:56:46.764939926Z level=info msg="Executing migration" id="drop table resource_blob"
14:59:56 grafana | logger=resource-migrator t=2025-06-13T14:56:46.765027737Z level=info msg="Migration successfully executed" id="drop table resource_blob" duration=88.231µs
14:59:56 grafana | logger=resource-migrator t=2025-06-13T14:56:46.769618766Z level=info msg="Executing migration" id="create table resource_blob"
14:59:56 grafana | logger=resource-migrator t=2025-06-13T14:56:46.770750234Z level=info msg="Migration successfully executed" id="create table resource_blob" duration=1.128059ms
14:59:56 grafana | logger=resource-migrator t=2025-06-13T14:56:46.7751764Z level=info msg="Executing migration" id="create table resource_blob, index: 0"
14:59:56 grafana | logger=resource-migrator t=2025-06-13T14:56:46.7764247Z level=info msg="Migration successfully executed" id="create table resource_blob, index: 0" duration=1.2483ms
14:59:56 grafana | logger=resource-migrator t=2025-06-13T14:56:46.784552458Z level=info msg="Executing migration" id="create table resource_blob, index: 1"
14:59:56 grafana | logger=resource-migrator t=2025-06-13T14:56:46.787462217Z level=info msg="Migration successfully executed" id="create table resource_blob, index: 1" duration=2.909779ms
14:59:56 grafana | logger=resource-migrator t=2025-06-13T14:56:46.79349411Z level=info msg="Executing migration" id="Add column previous_resource_version in resource_history"
14:59:56 grafana | logger=resource-migrator t=2025-06-13T14:56:46.805177518Z level=info msg="Migration successfully executed" id="Add column previous_resource_version in resource_history" duration=11.683408ms
14:59:56 grafana | logger=resource-migrator t=2025-06-13T14:56:46.809293847Z level=info msg="Executing migration" id="Add column previous_resource_version in resource"
14:59:56 grafana | logger=resource-migrator t=2025-06-13T14:56:46.822577692Z level=info msg="Migration successfully executed" id="Add column previous_resource_version in resource" duration=13.284035ms
14:59:56 grafana | logger=resource-migrator t=2025-06-13T14:56:46.830991024Z level=info msg="Executing migration" id="Add index to resource_history for polling"
14:59:56 grafana | logger=resource-migrator t=2025-06-13T14:56:46.833607769Z level=info msg="Migration successfully executed" id="Add index to resource_history for polling" duration=2.616745ms
14:59:56 grafana | logger=resource-migrator t=2025-06-13T14:56:46.838820327Z level=info msg="Executing migration" id="Add index to resource for loading"
14:59:56 grafana | logger=resource-migrator t=2025-06-13T14:56:46.840133349Z level=info msg="Migration successfully executed" id="Add index to resource for loading" duration=1.312822ms
14:59:56 grafana | logger=resource-migrator t=2025-06-13T14:56:46.843795412Z level=info msg="Executing migration" id="Add column folder in resource_history"
14:59:56 grafana | logger=resource-migrator t=2025-06-13T14:56:46.854684515Z level=info msg="Migration successfully executed" id="Add column folder in resource_history" duration=10.888733ms
14:59:56 grafana | logger=resource-migrator t=2025-06-13T14:56:46.859256413Z level=info msg="Executing migration" id="Add column folder in resource"
14:59:56 grafana | logger=resource-migrator t=2025-06-13T14:56:46.869991445Z level=info msg="Migration successfully executed" id="Add column folder in resource" duration=10.733662ms
14:59:56 grafana | logger=resource-migrator t=2025-06-13T14:56:46.876460294Z level=info msg="Executing migration" id="Migrate DeletionMarkers to real Resource objects"
14:59:56 grafana | logger=deletion-marker-migrator t=2025-06-13T14:56:46.876503065Z level=info msg="finding any deletion markers"
14:59:56 grafana | logger=resource-migrator t=2025-06-13T14:56:46.876987253Z level=info msg="Migration successfully executed" id="Migrate DeletionMarkers to real Resource objects" duration=526.499µs
14:59:56 grafana | logger=resource-migrator t=2025-06-13T14:56:46.881293136Z level=info msg="Executing migration" id="Add index to resource_history for get trash"
14:59:56 grafana | logger=resource-migrator t=2025-06-13T14:56:46.883817179Z level=info msg="Migration successfully executed" id="Add index to resource_history for get trash" duration=2.523643ms
14:59:56 grafana | logger=resource-migrator t=2025-06-13T14:56:46.888991147Z level=info msg="Executing migration" id="Add generation to resource history"
14:59:56 grafana | logger=resource-migrator t=2025-06-13T14:56:46.901455257Z level=info msg="Migration successfully executed" id="Add generation to resource history" duration=12.46411ms
14:59:56 grafana | logger=resource-migrator t=2025-06-13T14:56:46.910625942Z level=info msg="Executing migration" id="Add generation index to resource history"
14:59:56 grafana | logger=resource-migrator t=2025-06-13T14:56:46.912964782Z level=info msg="Migration successfully executed" id="Add generation index to resource history" duration=2.33884ms
14:59:56 grafana | logger=resource-migrator t=2025-06-13T14:56:46.919078665Z level=info msg="migrations completed" performed=26 skipped=0 duration=248.78428ms
14:59:56 grafana | logger=resource-migrator t=2025-06-13T14:56:46.919887789Z level=info msg="Unlocking database"
14:59:56 grafana | t=2025-06-13T14:56:46.920181834Z level=info caller=logger.go:214 time=2025-06-13T14:56:46.920162344Z msg="Using channel notifier" logger=sql-resource-server
14:59:56 grafana | logger=plugin.store t=2025-06-13T14:56:46.931497196Z level=info msg="Loading plugins..."
14:59:56 grafana | logger=plugins.registration t=2025-06-13T14:56:46.966231023Z level=error msg="Could not register plugin" pluginId=table error="plugin table is already registered" 14:59:56 grafana | logger=plugins.initialization t=2025-06-13T14:56:46.966254374Z level=error msg="Could not initialize plugin" pluginId=table error="plugin table is already registered" 14:59:56 grafana | logger=plugin.store t=2025-06-13T14:56:46.966283484Z level=info msg="Plugins loaded" count=53 duration=34.786948ms 14:59:56 grafana | logger=query_data t=2025-06-13T14:56:46.97070909Z level=info msg="Query Service initialization" 14:59:56 grafana | logger=live.push_http t=2025-06-13T14:56:46.97492589Z level=info msg="Live Push Gateway initialization" 14:59:56 grafana | logger=ngalert.notifier.alertmanager org=1 t=2025-06-13T14:56:46.987083387Z level=info msg="Applying new configuration to Alertmanager" configHash=d2c56faca6af2a5772ff4253222f7386 14:59:56 grafana | logger=ngalert t=2025-06-13T14:56:47.005469308Z level=info msg="Using simple database alert instance store" 14:59:56 grafana | logger=ngalert.state.manager.persist t=2025-06-13T14:56:47.005514218Z level=info msg="Using sync state persister" 14:59:56 grafana | logger=infra.usagestats.collector t=2025-06-13T14:56:47.009886143Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2 14:59:56 grafana | logger=ngalert.state.manager t=2025-06-13T14:56:47.01030754Z level=info msg="Warming state cache for startup" 14:59:56 grafana | logger=plugin.backgroundinstaller t=2025-06-13T14:56:47.011740453Z level=info msg="Installing plugin" pluginId=grafana-lokiexplore-app version= 14:59:56 grafana | logger=ngalert.multiorg.alertmanager t=2025-06-13T14:56:47.011809466Z level=info msg="Starting MultiOrg Alertmanager" 14:59:56 grafana | logger=grafanaStorageLogger t=2025-06-13T14:56:47.011938088Z level=info msg="Storage starting" 14:59:56 grafana | logger=http.server t=2025-06-13T14:56:47.01386473Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=http subUrl= socket= 14:59:56 grafana | logger=ngalert.state.manager t=2025-06-13T14:56:47.092919828Z level=info msg="State cache has been initialized" states=0 duration=82.593347ms 14:59:56 grafana | logger=ngalert.scheduler t=2025-06-13T14:56:47.093005759Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=3 14:59:56 grafana | logger=ticker t=2025-06-13T14:56:47.093423087Z level=info msg=starting first_tick=2025-06-13T14:56:50Z 14:59:56 grafana | logger=plugins.update.checker t=2025-06-13T14:56:47.111664345Z level=info msg="Update check succeeded" duration=101.379946ms 14:59:56 grafana | logger=grafana.update.checker t=2025-06-13T14:56:47.119083191Z level=info msg="Update check succeeded" duration=108.843943ms 14:59:56 grafana | logger=sqlstore.transactions t=2025-06-13T14:56:47.138348247Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 14:59:56 grafana | logger=plugin.angulardetectorsprovider.dynamic t=2025-06-13T14:56:47.144238617Z level=info msg="Patterns update finished" duration=131.339793ms 14:59:56 grafana | logger=provisioning.datasources t=2025-06-13T14:56:47.228083366Z level=info msg="inserting datasource from configuration" name=PolicyPrometheus uid=dkSf71fnz 14:59:56 grafana | logger=provisioning.alerting t=2025-06-13T14:56:47.260482514Z level=info msg="starting to provision alerting" 14:59:56 grafana | logger=provisioning.alerting t=2025-06-13T14:56:47.260515705Z level=info msg="finished to provision alerting" 14:59:56 
grafana | logger=provisioning.dashboard t=2025-06-13T14:56:47.262783383Z level=info msg="starting to provision dashboards" 14:59:56 grafana | logger=plugin.installer t=2025-06-13T14:56:48.142496404Z level=info msg="Installing plugin" pluginId=grafana-lokiexplore-app version= 14:59:56 grafana | logger=grafana-apiserver t=2025-06-13T14:56:48.223892982Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager" 14:59:56 grafana | logger=grafana-apiserver t=2025-06-13T14:56:48.227487312Z level=info msg="Adding GroupVersion dashboard.grafana.app v1beta1 to ResourceManager" 14:59:56 grafana | logger=grafana-apiserver t=2025-06-13T14:56:48.228257006Z level=info msg="Adding GroupVersion dashboard.grafana.app v0alpha1 to ResourceManager" 14:59:56 grafana | logger=grafana-apiserver t=2025-06-13T14:56:48.228799045Z level=info msg="Adding GroupVersion dashboard.grafana.app v2alpha1 to ResourceManager" 14:59:56 grafana | logger=grafana-apiserver t=2025-06-13T14:56:48.229469876Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager" 14:59:56 grafana | logger=grafana-apiserver t=2025-06-13T14:56:48.230646786Z level=info msg="Adding GroupVersion folder.grafana.app v1beta1 to ResourceManager" 14:59:56 grafana | logger=grafana-apiserver t=2025-06-13T14:56:48.232242503Z level=info msg="Adding GroupVersion iam.grafana.app v0alpha1 to ResourceManager" 14:59:56 grafana | logger=grafana-apiserver t=2025-06-13T14:56:48.237718276Z level=info msg="Adding GroupVersion notifications.alerting.grafana.app v0alpha1 to ResourceManager" 14:59:56 grafana | logger=grafana-apiserver t=2025-06-13T14:56:48.23910257Z level=info msg="Adding GroupVersion userstorage.grafana.app v0alpha1 to ResourceManager" 14:59:56 grafana | logger=app-registry t=2025-06-13T14:56:48.294162501Z level=info msg="app registry initialized" 14:59:56 grafana | logger=installer.fs t=2025-06-13T14:56:48.296567452Z level=info msg="Downloaded and extracted grafana-lokiexplore-app v1.0.17 zip successfully to /var/lib/grafana/plugins/grafana-lokiexplore-app" 14:59:56 grafana | logger=plugins.registration t=2025-06-13T14:56:48.328859729Z level=info msg="Plugin registered" pluginId=grafana-lokiexplore-app 14:59:56 grafana | logger=plugin.backgroundinstaller t=2025-06-13T14:56:48.328961971Z level=info msg="Plugin successfully installed" pluginId=grafana-lokiexplore-app version= duration=1.317191786s 14:59:56 grafana | logger=plugin.backgroundinstaller t=2025-06-13T14:56:48.329051332Z level=info msg="Installing plugin" pluginId=grafana-pyroscope-app version= 14:59:56 grafana | logger=plugin.installer t=2025-06-13T14:56:48.522641849Z level=info msg="Installing plugin" pluginId=grafana-pyroscope-app version= 14:59:56 grafana | logger=installer.fs t=2025-06-13T14:56:48.575393152Z level=info msg="Downloaded and extracted grafana-pyroscope-app v1.4.1 zip successfully to /var/lib/grafana/plugins/grafana-pyroscope-app" 14:59:56 grafana | logger=plugins.registration t=2025-06-13T14:56:48.591403542Z level=info msg="Plugin registered" pluginId=grafana-pyroscope-app 14:59:56 grafana | logger=plugin.backgroundinstaller t=2025-06-13T14:56:48.591421752Z level=info msg="Plugin successfully installed" pluginId=grafana-pyroscope-app version= duration=262.337969ms 14:59:56 grafana | logger=plugin.backgroundinstaller t=2025-06-13T14:56:48.591437033Z level=info msg="Installing plugin" pluginId=grafana-exploretraces-app version= 14:59:56 grafana | logger=plugin.installer t=2025-06-13T14:56:48.760581116Z level=info 
msg="Installing plugin" pluginId=grafana-exploretraces-app version= 14:59:56 grafana | logger=provisioning.dashboard t=2025-06-13T14:56:48.798372086Z level=info msg="finished to provision dashboards" 14:59:56 grafana | logger=installer.fs t=2025-06-13T14:56:48.821343854Z level=info msg="Downloaded and extracted grafana-exploretraces-app v1.0.0 zip successfully to /var/lib/grafana/plugins/grafana-exploretraces-app" 14:59:56 grafana | logger=plugins.registration t=2025-06-13T14:56:48.838054357Z level=info msg="Plugin registered" pluginId=grafana-exploretraces-app 14:59:56 grafana | logger=plugin.backgroundinstaller t=2025-06-13T14:56:48.838076497Z level=info msg="Plugin successfully installed" pluginId=grafana-exploretraces-app version= duration=246.634564ms 14:59:56 grafana | logger=plugin.backgroundinstaller t=2025-06-13T14:56:48.838097728Z level=info msg="Installing plugin" pluginId=grafana-metricsdrilldown-app version= 14:59:56 grafana | logger=plugin.installer t=2025-06-13T14:56:49.027611456Z level=info msg="Installing plugin" pluginId=grafana-metricsdrilldown-app version= 14:59:56 grafana | logger=installer.fs t=2025-06-13T14:56:49.087966698Z level=info msg="Downloaded and extracted grafana-metricsdrilldown-app v1.0.1 zip successfully to /var/lib/grafana/plugins/grafana-metricsdrilldown-app" 14:59:56 grafana | logger=plugins.registration t=2025-06-13T14:56:49.106486731Z level=info msg="Plugin registered" pluginId=grafana-metricsdrilldown-app 14:59:56 grafana | logger=plugin.backgroundinstaller t=2025-06-13T14:56:49.106506131Z level=info msg="Plugin successfully installed" pluginId=grafana-metricsdrilldown-app version= duration=268.403763ms 14:59:56 grafana | logger=infra.usagestats t=2025-06-13T14:57:26.019490994Z level=info msg="Usage stats are ready to report" 14:59:57 kafka | ===> User 14:59:57 kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) 14:59:57 kafka | ===> Configuring ... 14:59:57 kafka | Running in Zookeeper mode... 14:59:57 kafka | ===> Running preflight checks ... 14:59:57 kafka | ===> Check if /var/lib/kafka/data is writable ... 14:59:57 kafka | ===> Check if Zookeeper is healthy ... 14:59:57 kafka | [2025-06-13 14:56:43,515] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper) 14:59:57 kafka | [2025-06-13 14:56:43,516] INFO Client environment:host.name=kafka (org.apache.zookeeper.ZooKeeper) 14:59:57 kafka | [2025-06-13 14:56:43,516] INFO Client environment:java.version=11.0.26 (org.apache.zookeeper.ZooKeeper) 14:59:57 kafka | [2025-06-13 14:56:43,516] INFO Client environment:java.vendor=Azul Systems, Inc. 
(org.apache.zookeeper.ZooKeeper) 14:59:57 kafka | [2025-06-13 14:56:43,516] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) 14:59:57 kafka | [2025-06-13 14:56:43,516] INFO Client environment:java.class.path=/usr/share/java/cp-base-new/kafka-storage-api-7.4.9-ccs.jar:/usr/share/java/cp-base-new/scala-logging_2.13-3.9.4.jar:/usr/share/java/cp-base-new/jackson-datatype-jdk8-2.14.2.jar:/usr/share/java/cp-base-new/logredactor-1.0.12.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/kafka-server-common-7.4.9-ccs.jar:/usr/share/java/cp-base-new/scala-library-2.13.10.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/jackson-annotations-2.14.2.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/kafka-clients-7.4.9-ccs.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.14.2.jar:/usr/share/java/cp-base-new/kafka-storage-7.4.9-ccs.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/kafka-raft-7.4.9-ccs.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.8.4.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.2-1.jar:/usr/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-7.4.9-ccs.jar:/usr/share/java/cp-base-new/common-utils-7.4.9.jar:/usr/share/java/cp-base-new/jackson-dataformat-yaml-2.14.2.jar:/usr/share/java/cp-base-new/kafka-metadata-7.4.9-ccs.jar:/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/share/java/cp-base-new/jmx_prometheus_javaagent-0.18.0.jar:/usr/share/java/cp-base-new/reload4j-1.2.25.jar:/usr/share/java/cp-base-new/jackson-core-2.14.2.jar:/usr/share/java/cp-base-new/commons-io-2.16.0.jar:/usr/share/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/audience-annotations-0.12.0.jar:/usr/share/java/cp-base-new/gson-2.9.0.jar:/usr/share/java/cp-base-new/snakeyaml-2.0.jar:/usr/share/java/cp-base-new/zookeeper-3.8.4.jar:/usr/share/java/cp-base-new/kafka_2.13-7.4.9-ccs.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.12.jar:/usr/share/java/cp-base-new/utility-belt-7.4.9-53.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.10.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.14.2.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/jackson-databind-2.14.2.jar:/usr/share/java/cp-base-new/snappy-java-1.1.10.5.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.4.9.jar:/usr/share/java/cp-base-new/jose4j-0.9.5.jar (org.apache.zookeeper.ZooKeeper) 14:59:57 kafka | [2025-06-13 14:56:43,516] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) 14:59:57 kafka | [2025-06-13 14:56:43,516] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) 14:59:57 kafka | [2025-06-13 14:56:43,516] INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper) 14:59:57 kafka | [2025-06-13 14:56:43,516] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) 14:59:57 kafka | [2025-06-13 
14:56:43,516] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) 14:59:57 kafka | [2025-06-13 14:56:43,516] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) 14:59:57 kafka | [2025-06-13 14:56:43,516] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) 14:59:57 kafka | [2025-06-13 14:56:43,516] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) 14:59:57 kafka | [2025-06-13 14:56:43,516] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) 14:59:57 kafka | [2025-06-13 14:56:43,516] INFO Client environment:os.memory.free=493MB (org.apache.zookeeper.ZooKeeper) 14:59:57 kafka | [2025-06-13 14:56:43,516] INFO Client environment:os.memory.max=8042MB (org.apache.zookeeper.ZooKeeper) 14:59:57 kafka | [2025-06-13 14:56:43,516] INFO Client environment:os.memory.total=504MB (org.apache.zookeeper.ZooKeeper) 14:59:57 kafka | [2025-06-13 14:56:43,519] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@19dc67c2 (org.apache.zookeeper.ZooKeeper) 14:59:57 kafka | [2025-06-13 14:56:43,522] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) 14:59:57 kafka | [2025-06-13 14:56:43,526] INFO jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket) 14:59:57 kafka | [2025-06-13 14:56:43,534] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) 14:59:57 kafka | [2025-06-13 14:56:43,551] INFO Opening socket connection to server zookeeper/172.17.0.3:2181. (org.apache.zookeeper.ClientCnxn) 14:59:57 kafka | [2025-06-13 14:56:43,551] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn) 14:59:57 kafka | [2025-06-13 14:56:43,559] INFO Socket connection established, initiating session, client: /172.17.0.5:56728, server: zookeeper/172.17.0.3:2181 (org.apache.zookeeper.ClientCnxn) 14:59:57 kafka | [2025-06-13 14:56:43,596] INFO Session establishment complete on server zookeeper/172.17.0.3:2181, session id = 0x10000023f370000, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn) 14:59:57 kafka | [2025-06-13 14:56:43,715] INFO Session: 0x10000023f370000 closed (org.apache.zookeeper.ZooKeeper) 14:59:57 kafka | [2025-06-13 14:56:43,715] INFO EventThread shut down for session: 0x10000023f370000 (org.apache.zookeeper.ClientCnxn) 14:59:57 kafka | Using log4j config /etc/kafka/log4j.properties 14:59:57 kafka | ===> Launching ... 14:59:57 kafka | ===> Launching kafka ... 14:59:57 kafka | [2025-06-13 14:56:44,344] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$) 14:59:57 kafka | [2025-06-13 14:56:44,609] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) 14:59:57 kafka | [2025-06-13 14:56:44,685] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler) 14:59:57 kafka | [2025-06-13 14:56:44,687] INFO starting (kafka.server.KafkaServer) 14:59:57 kafka | [2025-06-13 14:56:44,687] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer) 14:59:57 kafka | [2025-06-13 14:56:44,699] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181. 
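The "Check if Zookeeper is healthy" preflight above amounts to opening a throwaway session (sessionTimeout=40000, session id 0x10000023f370000) and closing it as soon as establishment succeeds. A minimal sketch of that liveness probe, assuming the Python kazoo client library (the image itself uses io.confluent.admin.utils.ZookeeperConnectionWatcher, as logged):

from kazoo.client import KazooClient

# Hypothetical re-creation of the preflight: open a session against the same
# endpoint with the same 40 s session timeout, then tear it down immediately.
zk = KazooClient(hosts="zookeeper:2181", timeout=40.0)
zk.start()   # returns once the equivalent of "Session establishment complete" happens
zk.stop()    # mirrors the "Session: ... closed" / "EventThread shut down" pair above
zk.close()

If the session cannot be established, zk.start() raises, which is exactly the failure mode the preflight is designed to surface before launching the broker.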
14:59:57 kafka | [2025-06-13 14:56:44,702] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper)
14:59:57 kafka | [2025-06-13 14:56:44,702] INFO Client environment:host.name=kafka (org.apache.zookeeper.ZooKeeper)
14:59:57 kafka | [2025-06-13 14:56:44,702] INFO Client environment:java.version=11.0.26 (org.apache.zookeeper.ZooKeeper)
14:59:57 kafka | [2025-06-13 14:56:44,702] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper)
14:59:57 kafka | [2025-06-13 14:56:44,702] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper)
14:59:57 kafka | [2025-06-13 14:56:44,702] INFO Client environment:java.class.path=/usr/bin/../share/java/kafka/kafka-storage-api-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/netty-common-4.1.115.Final.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/connect-json-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/scala-library-2.13.10.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/kafka-shell-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/reflections-0.9.12.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/kafka-clients-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/kafka-storage-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/kafka-raft-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/kafka-streams-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.0.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.2-1.jar:/usr/bin/../share/java/kafka/connect-runtime-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.115.Final.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/connect-api-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.115.Final.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.115.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.1.2.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.115.Final.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.115.Final.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/trogdor-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/commons-io-2.16.0.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/connect-transforms-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/connect-mirror-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/kafka-tools-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.10.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.115.Final.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.4.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.115.Final.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.115.Final.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.57.v20241219.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.ZooKeeper)
14:59:57 kafka | [2025-06-13 14:56:44,702] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
14:59:57 kafka | [2025-06-13 14:56:44,703] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
14:59:57 kafka | [2025-06-13 14:56:44,703] INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper)
14:59:57 kafka | [2025-06-13 14:56:44,703] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
14:59:57 kafka | [2025-06-13 14:56:44,703] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
14:59:57 kafka | [2025-06-13 14:56:44,703] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper)
14:59:57 kafka | [2025-06-13 14:56:44,703] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper)
14:59:57 kafka | [2025-06-13 14:56:44,703] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper)
14:59:57 kafka | [2025-06-13 14:56:44,703] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper)
14:59:57 kafka | [2025-06-13 14:56:44,703] INFO Client environment:os.memory.free=1009MB (org.apache.zookeeper.ZooKeeper)
14:59:57 kafka | [2025-06-13 14:56:44,703] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper)
14:59:57 kafka | [2025-06-13 14:56:44,703] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper)
14:59:57 kafka | [2025-06-13 14:56:44,704] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@584f54e6 (org.apache.zookeeper.ZooKeeper)
14:59:57 kafka | [2025-06-13 14:56:44,708] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket)
14:59:57 kafka | [2025-06-13 14:56:44,713] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn)
14:59:57 kafka | [2025-06-13 14:56:44,714] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
14:59:57 kafka | [2025-06-13 14:56:44,721] INFO Opening socket connection to server zookeeper/172.17.0.3:2181. (org.apache.zookeeper.ClientCnxn)
14:59:57 kafka | [2025-06-13 14:56:44,739] INFO Socket connection established, initiating session, client: /172.17.0.5:56730, server: zookeeper/172.17.0.3:2181 (org.apache.zookeeper.ClientCnxn)
14:59:57 kafka | [2025-06-13 14:56:44,750] INFO Session establishment complete on server zookeeper/172.17.0.3:2181, session id = 0x10000023f370001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn)
14:59:57 kafka | [2025-06-13 14:56:44,757] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient)
14:59:57 kafka | [2025-06-13 14:56:45,099] INFO Cluster ID = d-rF8NzzQdGshpvqUU-qrg (kafka.server.KafkaServer)
14:59:57 kafka | [2025-06-13 14:56:45,103] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint)
14:59:57 kafka | [2025-06-13 14:56:45,153] INFO KafkaConfig values:
14:59:57 kafka | advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
14:59:57 kafka | alter.config.policy.class.name = null
14:59:57 kafka | alter.log.dirs.replication.quota.window.num = 11
14:59:57 kafka | alter.log.dirs.replication.quota.window.size.seconds = 1
14:59:57 kafka | authorizer.class.name =
14:59:57 kafka | auto.create.topics.enable = true
14:59:57 kafka | auto.include.jmx.reporter = true
14:59:57 kafka | auto.leader.rebalance.enable = true
14:59:57 kafka | background.threads = 10
14:59:57 kafka | broker.heartbeat.interval.ms = 2000
14:59:57 kafka | broker.id = 1
14:59:57 kafka | broker.id.generation.enable = true
14:59:57 kafka | broker.rack = null
14:59:57 kafka | broker.session.timeout.ms = 9000
14:59:57 kafka | client.quota.callback.class = null
14:59:57 kafka | compression.type = producer
14:59:57 kafka | connection.failed.authentication.delay.ms = 100
14:59:57 kafka | connections.max.idle.ms = 600000
14:59:57 kafka | connections.max.reauth.ms = 0
14:59:57 kafka | control.plane.listener.name = null
14:59:57 kafka | controlled.shutdown.enable = true
14:59:57 kafka | controlled.shutdown.max.retries = 3
14:59:57 kafka | controlled.shutdown.retry.backoff.ms = 5000
14:59:57 kafka | controller.listener.names = null
14:59:57 kafka | controller.quorum.append.linger.ms = 25
14:59:57 kafka | controller.quorum.election.backoff.max.ms = 1000
14:59:57 kafka | controller.quorum.election.timeout.ms = 1000
14:59:57 kafka | controller.quorum.fetch.timeout.ms = 2000
14:59:57 kafka | controller.quorum.request.timeout.ms = 2000
14:59:57 kafka | controller.quorum.retry.backoff.ms = 20
14:59:57 kafka | controller.quorum.voters = []
14:59:57 kafka | controller.quota.window.num = 11
14:59:57 kafka | controller.quota.window.size.seconds = 1
14:59:57 kafka | controller.socket.timeout.ms = 30000
14:59:57 kafka | create.topic.policy.class.name = null
14:59:57 kafka | default.replication.factor = 1
14:59:57 kafka | delegation.token.expiry.check.interval.ms = 3600000
14:59:57 kafka | delegation.token.expiry.time.ms = 86400000
14:59:57 kafka | delegation.token.master.key = null
14:59:57 kafka | delegation.token.max.lifetime.ms = 604800000
14:59:57 kafka | delegation.token.secret.key = null
14:59:57 kafka | delete.records.purgatory.purge.interval.requests = 1
14:59:57 kafka | delete.topic.enable = true
14:59:57 kafka | early.start.listeners = null
14:59:57 kafka | fetch.max.bytes = 57671680
14:59:57 kafka | fetch.purgatory.purge.interval.requests = 1000
14:59:57 kafka | group.initial.rebalance.delay.ms = 3000
14:59:57 kafka | group.max.session.timeout.ms = 1800000
14:59:57 kafka | group.max.size = 2147483647
14:59:57 kafka | group.min.session.timeout.ms = 6000
14:59:57 kafka | initial.broker.registration.timeout.ms = 60000
14:59:57 kafka | inter.broker.listener.name = PLAINTEXT
14:59:57 kafka | inter.broker.protocol.version = 3.4-IV0
14:59:57 kafka | kafka.metrics.polling.interval.secs = 10
14:59:57 kafka | kafka.metrics.reporters = []
14:59:57 kafka | leader.imbalance.check.interval.seconds = 300
14:59:57 kafka | leader.imbalance.per.broker.percentage = 10
14:59:57 kafka | listener.security.protocol.map = PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
14:59:57 kafka | listeners = PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092
14:59:57 kafka | log.cleaner.backoff.ms = 15000
14:59:57 kafka | log.cleaner.dedupe.buffer.size = 134217728
14:59:57 kafka | log.cleaner.delete.retention.ms = 86400000
14:59:57 kafka | log.cleaner.enable = true
14:59:57 kafka | log.cleaner.io.buffer.load.factor = 0.9
14:59:57 kafka | log.cleaner.io.buffer.size = 524288
14:59:57 kafka | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
14:59:57 kafka | log.cleaner.max.compaction.lag.ms = 9223372036854775807
14:59:57 kafka | log.cleaner.min.cleanable.ratio = 0.5
14:59:57 kafka | log.cleaner.min.compaction.lag.ms = 0
14:59:57 kafka | log.cleaner.threads = 1
14:59:57 kafka | log.cleanup.policy = [delete]
14:59:57 kafka | log.dir = /tmp/kafka-logs
14:59:57 kafka | log.dirs = /var/lib/kafka/data
14:59:57 kafka | log.flush.interval.messages = 9223372036854775807
14:59:57 kafka | log.flush.interval.ms = null
14:59:57 kafka | log.flush.offset.checkpoint.interval.ms = 60000
14:59:57 kafka | log.flush.scheduler.interval.ms = 9223372036854775807
14:59:57 kafka | log.flush.start.offset.checkpoint.interval.ms = 60000
14:59:57 kafka | log.index.interval.bytes = 4096
14:59:57 kafka | log.index.size.max.bytes = 10485760
14:59:57 kafka | log.message.downconversion.enable = true
14:59:57 kafka | log.message.format.version = 3.0-IV1
14:59:57 kafka | log.message.timestamp.difference.max.ms = 9223372036854775807
14:59:57 kafka | log.message.timestamp.type = CreateTime
14:59:57 kafka | log.preallocate = false
14:59:57 kafka | log.retention.bytes = -1
14:59:57 kafka | log.retention.check.interval.ms = 300000
14:59:57 kafka | log.retention.hours = 168
14:59:57 kafka | log.retention.minutes = null
14:59:57 kafka | log.retention.ms = null
14:59:57 kafka | log.roll.hours = 168
14:59:57 kafka | log.roll.jitter.hours = 0
14:59:57 kafka | log.roll.jitter.ms = null
14:59:57 kafka | log.roll.ms = null
14:59:57 kafka | log.segment.bytes = 1073741824
14:59:57 kafka | log.segment.delete.delay.ms = 60000
14:59:57 kafka | max.connection.creation.rate = 2147483647
14:59:57 kafka | max.connections = 2147483647
14:59:57 kafka | max.connections.per.ip = 2147483647
14:59:57 kafka | max.connections.per.ip.overrides =
14:59:57 kafka | max.incremental.fetch.session.cache.slots = 1000
14:59:57 kafka | message.max.bytes = 1048588
14:59:57 kafka | metadata.log.dir = null
14:59:57 kafka | metadata.log.max.record.bytes.between.snapshots = 20971520
14:59:57 kafka | metadata.log.max.snapshot.interval.ms = 3600000
14:59:57 kafka | metadata.log.segment.bytes = 1073741824
14:59:57 kafka | metadata.log.segment.min.bytes = 8388608
14:59:57 kafka | metadata.log.segment.ms = 604800000
14:59:57 kafka | metadata.max.idle.interval.ms = 500
14:59:57 kafka | metadata.max.retention.bytes = 104857600
14:59:57 kafka | metadata.max.retention.ms = 604800000
14:59:57 kafka | metric.reporters = []
14:59:57 kafka | metrics.num.samples = 2
14:59:57 kafka | metrics.recording.level = INFO
14:59:57 kafka | metrics.sample.window.ms = 30000
14:59:57 kafka | min.insync.replicas = 1
14:59:57 kafka | node.id = 1
14:59:57 kafka | num.io.threads = 8
14:59:57 kafka | num.network.threads = 3
14:59:57 kafka | num.partitions = 1
14:59:57 kafka | num.recovery.threads.per.data.dir = 1
14:59:57 kafka | num.replica.alter.log.dirs.threads = null
14:59:57 kafka | num.replica.fetchers = 1
14:59:57 kafka | offset.metadata.max.bytes = 4096
14:59:57 kafka | offsets.commit.required.acks = -1
14:59:57 kafka | offsets.commit.timeout.ms = 5000
14:59:57 kafka | offsets.load.buffer.size = 5242880
14:59:57 kafka | offsets.retention.check.interval.ms = 600000
14:59:57 kafka | offsets.retention.minutes = 10080
14:59:57 kafka | offsets.topic.compression.codec = 0
14:59:57 kafka | offsets.topic.num.partitions = 50
14:59:57 kafka | offsets.topic.replication.factor = 1
14:59:57 kafka | offsets.topic.segment.bytes = 104857600
14:59:57 kafka | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
14:59:57 kafka | password.encoder.iterations = 4096
14:59:57 kafka | password.encoder.key.length = 128
14:59:57 kafka | password.encoder.keyfactory.algorithm = null
14:59:57 kafka | password.encoder.old.secret = null
14:59:57 kafka | password.encoder.secret = null
14:59:57 kafka | principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder
14:59:57 kafka | process.roles = []
14:59:57 kafka | producer.id.expiration.check.interval.ms = 600000
14:59:57 kafka | producer.id.expiration.ms = 86400000
14:59:57 kafka | producer.purgatory.purge.interval.requests = 1000
14:59:57 kafka | queued.max.request.bytes = -1
14:59:57 kafka | queued.max.requests = 500
14:59:57 kafka | quota.window.num = 11
14:59:57 kafka | quota.window.size.seconds = 1
14:59:57 kafka | remote.log.index.file.cache.total.size.bytes = 1073741824
14:59:57 kafka | remote.log.manager.task.interval.ms = 30000
14:59:57 kafka | remote.log.manager.task.retry.backoff.max.ms = 30000
14:59:57 kafka | remote.log.manager.task.retry.backoff.ms = 500
14:59:57 kafka | remote.log.manager.task.retry.jitter = 0.2
14:59:57 kafka | remote.log.manager.thread.pool.size = 10
14:59:57 kafka | remote.log.metadata.manager.class.name = null
14:59:57 kafka | remote.log.metadata.manager.class.path = null
14:59:57 kafka | remote.log.metadata.manager.impl.prefix = null
14:59:57 kafka | remote.log.metadata.manager.listener.name = null
14:59:57 kafka | remote.log.reader.max.pending.tasks = 100
14:59:57 kafka | remote.log.reader.threads = 10
14:59:57 kafka | remote.log.storage.manager.class.name = null
14:59:57 kafka | remote.log.storage.manager.class.path = null
14:59:57 kafka | remote.log.storage.manager.impl.prefix = null
14:59:57 kafka | remote.log.storage.system.enable = false
14:59:57 kafka | replica.fetch.backoff.ms = 1000
14:59:57 kafka | replica.fetch.max.bytes = 1048576
14:59:57 kafka | replica.fetch.min.bytes = 1
14:59:57 kafka | replica.fetch.response.max.bytes = 10485760
14:59:57 kafka | replica.fetch.wait.max.ms = 500
14:59:57 kafka | replica.high.watermark.checkpoint.interval.ms = 5000
14:59:57 kafka | replica.lag.time.max.ms = 30000
14:59:57 kafka | replica.selector.class = null
14:59:57 kafka | replica.socket.receive.buffer.bytes = 65536
14:59:57 kafka | replica.socket.timeout.ms = 30000
14:59:57 kafka | replication.quota.window.num = 11
14:59:57 kafka | replication.quota.window.size.seconds = 1
14:59:57 kafka | request.timeout.ms = 30000
14:59:57 kafka | reserved.broker.max.id = 1000
14:59:57 kafka | sasl.client.callback.handler.class = null
14:59:57 kafka | sasl.enabled.mechanisms = [GSSAPI]
14:59:57 kafka | sasl.jaas.config = null
14:59:57 kafka | sasl.kerberos.kinit.cmd = /usr/bin/kinit
14:59:57 kafka | sasl.kerberos.min.time.before.relogin = 60000
14:59:57 kafka | sasl.kerberos.principal.to.local.rules = [DEFAULT]
14:59:57 kafka | sasl.kerberos.service.name = null
14:59:57 kafka | sasl.kerberos.ticket.renew.jitter = 0.05
14:59:57 kafka | sasl.kerberos.ticket.renew.window.factor = 0.8
14:59:57 kafka | sasl.login.callback.handler.class = null
14:59:57 kafka | sasl.login.class = null
14:59:57 kafka | sasl.login.connect.timeout.ms = null
14:59:57 kafka | sasl.login.read.timeout.ms = null
14:59:57 kafka | sasl.login.refresh.buffer.seconds = 300
14:59:57 kafka | sasl.login.refresh.min.period.seconds = 60
14:59:57 kafka | sasl.login.refresh.window.factor = 0.8
14:59:57 kafka | sasl.login.refresh.window.jitter = 0.05
14:59:57 kafka | sasl.login.retry.backoff.max.ms = 10000
14:59:57 kafka | sasl.login.retry.backoff.ms = 100
14:59:57 kafka | sasl.mechanism.controller.protocol = GSSAPI
14:59:57 kafka | sasl.mechanism.inter.broker.protocol = GSSAPI
14:59:57 kafka | sasl.oauthbearer.clock.skew.seconds = 30
14:59:57 kafka | sasl.oauthbearer.expected.audience = null
14:59:57 kafka | sasl.oauthbearer.expected.issuer = null
14:59:57 kafka | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
14:59:57 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
14:59:57 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
14:59:57 kafka | sasl.oauthbearer.jwks.endpoint.url = null
14:59:57 kafka | sasl.oauthbearer.scope.claim.name = scope
14:59:57 kafka | sasl.oauthbearer.sub.claim.name = sub
14:59:57 kafka | sasl.oauthbearer.token.endpoint.url = null
14:59:57 kafka | sasl.server.callback.handler.class = null
14:59:57 kafka | sasl.server.max.receive.size = 524288
14:59:57 kafka | security.inter.broker.protocol = PLAINTEXT
14:59:57 kafka | security.providers = null
14:59:57 kafka | socket.connection.setup.timeout.max.ms = 30000
14:59:57 kafka | socket.connection.setup.timeout.ms = 10000
14:59:57 kafka | socket.listen.backlog.size = 50
14:59:57 kafka | socket.receive.buffer.bytes = 102400
14:59:57 kafka | socket.request.max.bytes = 104857600
14:59:57 kafka | socket.send.buffer.bytes = 102400
14:59:57 kafka | ssl.cipher.suites = []
14:59:57 kafka | ssl.client.auth = none
14:59:57 kafka | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
14:59:57 kafka | ssl.endpoint.identification.algorithm = https
14:59:57 kafka | ssl.engine.factory.class = null
14:59:57 kafka | ssl.key.password = null
14:59:57 kafka | ssl.keymanager.algorithm = SunX509
14:59:57 kafka | ssl.keystore.certificate.chain = null
14:59:57 kafka | ssl.keystore.key = null
14:59:57 kafka | ssl.keystore.location = null
14:59:57 kafka | ssl.keystore.password = null
14:59:57 kafka | ssl.keystore.type = JKS
14:59:57 kafka | ssl.principal.mapping.rules = DEFAULT
14:59:57 kafka | ssl.protocol = TLSv1.3
14:59:57 kafka | ssl.provider = null
14:59:57 kafka | ssl.secure.random.implementation = null
14:59:57 kafka | ssl.trustmanager.algorithm = PKIX
14:59:57 kafka | ssl.truststore.certificates = null
14:59:57 kafka | ssl.truststore.location = null
14:59:57 kafka | ssl.truststore.password = null
14:59:57 kafka | ssl.truststore.type = JKS
14:59:57 kafka | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000
14:59:57 kafka | transaction.max.timeout.ms = 900000
14:59:57 kafka | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
14:59:57 kafka | transaction.state.log.load.buffer.size = 5242880
14:59:57 kafka | transaction.state.log.min.isr = 2
14:59:57 kafka | transaction.state.log.num.partitions = 50
14:59:57 kafka | transaction.state.log.replication.factor = 3
14:59:57 kafka | transaction.state.log.segment.bytes = 104857600
14:59:57 kafka | transactional.id.expiration.ms = 604800000
14:59:57 kafka | unclean.leader.election.enable = false
14:59:57 kafka | zookeeper.clientCnxnSocket = null
14:59:57 kafka | zookeeper.connect = zookeeper:2181
14:59:57 kafka | zookeeper.connection.timeout.ms = null
14:59:57 kafka | zookeeper.max.in.flight.requests = 10
14:59:57 kafka | zookeeper.metadata.migration.enable = false
14:59:57 kafka | zookeeper.session.timeout.ms = 18000
14:59:57 kafka | zookeeper.set.acl = false
14:59:57 kafka | zookeeper.ssl.cipher.suites = null
14:59:57 kafka | zookeeper.ssl.client.enable = false
14:59:57 kafka | zookeeper.ssl.crl.enable = false
14:59:57 kafka | zookeeper.ssl.enabled.protocols = null
14:59:57 kafka | zookeeper.ssl.endpoint.identification.algorithm = HTTPS
14:59:57 kafka | zookeeper.ssl.keystore.location = null
14:59:57 kafka | zookeeper.ssl.keystore.password = null
14:59:57 kafka | zookeeper.ssl.keystore.type = null
14:59:57 kafka | zookeeper.ssl.ocsp.enable = false
14:59:57 kafka | zookeeper.ssl.protocol = TLSv1.2
14:59:57 kafka | zookeeper.ssl.truststore.location = null
14:59:57 kafka | zookeeper.ssl.truststore.password = null
14:59:57 kafka | zookeeper.ssl.truststore.type = null
14:59:57 kafka | (kafka.server.KafkaConfig)
14:59:57 kafka | [2025-06-13 14:56:45,187] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
14:59:57 kafka | [2025-06-13 14:56:45,188] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
14:59:57 kafka | [2025-06-13 14:56:45,190] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
14:59:57 kafka | [2025-06-13 14:56:45,191] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
14:59:57 kafka | [2025-06-13 14:56:45,231] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager)
14:59:57 kafka | [2025-06-13 14:56:45,233] INFO Attempting recovery for all logs in /var/lib/kafka/data since no clean shutdown file was found (kafka.log.LogManager)
14:59:57 kafka | [2025-06-13 14:56:45,247] INFO Loaded 0 logs in 16ms. (kafka.log.LogManager)
14:59:57 kafka | [2025-06-13 14:56:45,247] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
14:59:57 kafka | [2025-06-13 14:56:45,249] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
14:59:57 kafka | [2025-06-13 14:56:45,259] INFO Starting the log cleaner (kafka.log.LogCleaner)
14:59:57 kafka | [2025-06-13 14:56:45,313] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner)
14:59:57 kafka | [2025-06-13 14:56:45,335] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread)
14:59:57 kafka | [2025-06-13 14:56:45,349] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener)
14:59:57 kafka | [2025-06-13 14:56:45,392] INFO [BrokerToControllerChannelManager broker=1 name=forwarding]: Starting (kafka.server.BrokerToControllerRequestThread)
14:59:57 kafka | [2025-06-13 14:56:45,719] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
14:59:57 kafka | [2025-06-13 14:56:45,723] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.DataPlaneAcceptor)
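The listener pair in the config dump above is what makes this single-broker CSIT setup reachable both inside and outside the Docker network: the broker binds 0.0.0.0:9092 and 0.0.0.0:29092, but advertises PLAINTEXT://kafka:9092 to containers and PLAINTEXT_HOST://localhost:29092 to the host. A minimal sketch of a host-side client, assuming the kafka-python package (not something the job itself runs):

from kafka import KafkaProducer

# Hypothetical external client: bootstraps via the host-mapped port and is then
# handed the advertised PLAINTEXT_HOST://localhost:29092 endpoint in metadata.
producer = KafkaProducer(bootstrap_servers="localhost:29092")
producer.send("policy-pdp-pap", b"ping")  # auto.create.topics.enable=true per the dump
producer.flush()
producer.close()

A container on the same Docker network would instead bootstrap against kafka:9092 and receive the PLAINTEXT://kafka:9092 advertisement.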
14:59:57 kafka | [2025-06-13 14:56:45,744] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer)
14:59:57 kafka | [2025-06-13 14:56:45,744] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
14:59:57 kafka | [2025-06-13 14:56:45,745] INFO Awaiting socket connections on 0.0.0.0:29092. (kafka.network.DataPlaneAcceptor)
14:59:57 kafka | [2025-06-13 14:56:45,749] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT_HOST) (kafka.network.SocketServer)
14:59:57 kafka | [2025-06-13 14:56:45,753] INFO [BrokerToControllerChannelManager broker=1 name=alterPartition]: Starting (kafka.server.BrokerToControllerRequestThread)
14:59:57 kafka | [2025-06-13 14:56:45,770] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
14:59:57 kafka | [2025-06-13 14:56:45,772] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
14:59:57 kafka | [2025-06-13 14:56:45,773] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
14:59:57 kafka | [2025-06-13 14:56:45,776] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
14:59:57 kafka | [2025-06-13 14:56:45,793] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
14:59:57 kafka | [2025-06-13 14:56:45,817] INFO Creating /brokers/ids/1 (is it secure? false) (kafka.zk.KafkaZkClient)
14:59:57 kafka | [2025-06-13 14:56:45,850] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1749826605831,1749826605831,1,0,0,72057603688431617,258,0,27
14:59:57 kafka | (kafka.zk.KafkaZkClient)
14:59:57 kafka | [2025-06-13 14:56:45,852] INFO Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, czxid (broker epoch): 27 (kafka.zk.KafkaZkClient)
14:59:57 kafka | [2025-06-13 14:56:45,905] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread)
14:59:57 kafka | [2025-06-13 14:56:45,913] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
14:59:57 kafka | [2025-06-13 14:56:45,920] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
14:59:57 kafka | [2025-06-13 14:56:45,921] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
14:59:57 kafka | [2025-06-13 14:56:45,935] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient)
14:59:57 kafka | [2025-06-13 14:56:45,940] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator)
14:59:57 kafka | [2025-06-13 14:56:45,945] INFO [Controller id=1] 1 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController)
14:59:57 kafka | [2025-06-13 14:56:45,947] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
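The registration just logged can be checked directly in ZooKeeper: /brokers/ids/1 is an ephemeral znode whose JSON payload lists the advertised endpoints, and whose creation zxid doubles as the broker epoch. A hedged verification sketch, again assuming kazoo:

import json
from kazoo.client import KazooClient

# Hypothetical check of the registration performed above.
zk = KazooClient(hosts="zookeeper:2181")
zk.start()
payload, stat = zk.get("/brokers/ids/1")
print(json.loads(payload).get("endpoints"))  # expect PLAINTEXT://kafka:9092 and PLAINTEXT_HOST://localhost:29092
print(stat.czxid)                            # 27 in this run, matching "czxid (broker epoch): 27"
zk.stop()
zk.close()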
14:59:57 kafka | [2025-06-13 14:56:45,954] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController)
14:59:57 kafka | [2025-06-13 14:56:45,963] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
14:59:57 kafka | [2025-06-13 14:56:45,974] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener)
14:59:57 kafka | [2025-06-13 14:56:45,977] INFO [Transaction Marker Channel Manager 1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
14:59:57 kafka | [2025-06-13 14:56:45,978] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
14:59:57 kafka | [2025-06-13 14:56:45,999] INFO [MetadataCache brokerId=1] Updated cache from existing <empty> to latest FinalizedFeaturesAndEpoch(features=Map(), epoch=0). (kafka.server.metadata.ZkMetadataCache)
14:59:57 kafka | [2025-06-13 14:56:45,999] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController)
14:59:57 kafka | [2025-06-13 14:56:46,008] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController)
14:59:57 kafka | [2025-06-13 14:56:46,013] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController)
14:59:57 kafka | [2025-06-13 14:56:46,015] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController)
14:59:57 kafka | [2025-06-13 14:56:46,022] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
14:59:57 kafka | [2025-06-13 14:56:46,043] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController)
14:59:57 kafka | [2025-06-13 14:56:46,051] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController)
14:59:57 kafka | [2025-06-13 14:56:46,055] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
14:59:57 kafka | [2025-06-13 14:56:46,061] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager)
14:59:57 kafka | [2025-06-13 14:56:46,069] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread)
14:59:57 kafka | [2025-06-13 14:56:46,072] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController)
14:59:57 kafka | [2025-06-13 14:56:46,073] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController)
14:59:57 kafka | [2025-06-13 14:56:46,073] INFO [Controller id=1] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController)
14:59:57 kafka | [2025-06-13 14:56:46,073] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController)
14:59:57 kafka | [2025-06-13 14:56:46,078] INFO [Controller id=1] List of topics to be deleted: (kafka.controller.KafkaController)
14:59:57 kafka | [2025-06-13 14:56:46,078] INFO [Controller id=1] List of topics ineligible for deletion: (kafka.controller.KafkaController)
14:59:57 kafka | [2025-06-13 14:56:46,078] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController)
14:59:57 kafka | [2025-06-13 14:56:46,079] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. (kafka.network.SocketServer)
14:59:57 kafka | [2025-06-13 14:56:46,079] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager)
14:59:57 kafka | [2025-06-13 14:56:46,080] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController)
14:59:57 kafka | [2025-06-13 14:56:46,084] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger)
14:59:57 kafka | [2025-06-13 14:56:46,095] INFO Kafka version: 7.4.9-ccs (org.apache.kafka.common.utils.AppInfoParser)
14:59:57 kafka | [2025-06-13 14:56:46,095] INFO Kafka commitId: 07d888cfc0d14765fe5557324f1fdb4ada6698a5 (org.apache.kafka.common.utils.AppInfoParser)
14:59:57 kafka | [2025-06-13 14:56:46,096] INFO Kafka startTimeMs: 1749826606090 (org.apache.kafka.common.utils.AppInfoParser)
14:59:57 kafka | [2025-06-13 14:56:46,096] INFO [ReplicaStateMachine controllerId=1] Initializing replica state (kafka.controller.ZkReplicaStateMachine)
14:59:57 kafka | [2025-06-13 14:56:46,097] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine)
14:59:57 kafka | [2025-06-13 14:56:46,098] INFO [KafkaServer id=1] started (kafka.server.KafkaServer)
14:59:57 kafka | [2025-06-13 14:56:46,103] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine)
14:59:57 kafka | [2025-06-13 14:56:46,104] DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine)
14:59:57 kafka | [2025-06-13 14:56:46,104] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine)
14:59:57 kafka | [2025-06-13 14:56:46,105] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine)
14:59:57 kafka | [2025-06-13 14:56:46,107] DEBUG [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine)
14:59:57 kafka | [2025-06-13 14:56:46,108] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController)
14:59:57 kafka | [2025-06-13 14:56:46,110] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread)
14:59:57 kafka | [2025-06-13 14:56:46,129] INFO [Controller id=1] Partitions undergoing preferred replica election: (kafka.controller.KafkaController)
14:59:57 kafka | [2025-06-13 14:56:46,129] INFO [Controller id=1] Partitions that completed preferred replica election: (kafka.controller.KafkaController)
14:59:57 kafka | [2025-06-13 14:56:46,129] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController)
14:59:57 kafka | [2025-06-13 14:56:46,130] INFO [Controller id=1] Resuming preferred replica election for partitions: (kafka.controller.KafkaController)
14:59:57 kafka | [2025-06-13 14:56:46,132] INFO [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered (kafka.controller.KafkaController)
14:59:57 kafka | [2025-06-13 14:56:46,151] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController)
14:59:57 kafka | [2025-06-13 14:56:46,205] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 0 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
14:59:57 kafka | [2025-06-13 14:56:46,209] INFO [BrokerToControllerChannelManager broker=1 name=forwarding]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread)
14:59:57 kafka | [2025-06-13 14:56:46,256] INFO [BrokerToControllerChannelManager broker=1 name=alterPartition]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread)
14:59:57 kafka | [2025-06-13 14:56:51,153] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController)
14:59:57 kafka | [2025-06-13 14:56:51,154] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController)
14:59:57 kafka | [2025-06-13 14:57:13,303] INFO Creating topic policy-pdp-pap with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient)
14:59:57 kafka | [2025-06-13 14:57:13,304] DEBUG [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block (kafka.controller.KafkaController)
14:59:57 kafka | [2025-06-13 14:57:13,314] INFO [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 (kafka.controller.KafkaController)
14:59:57 kafka | [2025-06-13 14:57:13,322] INFO Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient)
14:59:57 kafka | [2025-06-13 14:57:13,341] INFO [Controller id=1] New topics: [Set(policy-pdp-pap)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-pdp-pap,Some(4jifs8wHRkq0H0ikcQqZKA),Map(policy-pdp-pap-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController)
14:59:57 kafka | [2025-06-13 14:57:13,341] INFO [Controller id=1] New partition creation callback for policy-pdp-pap-0 (kafka.controller.KafkaController)
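From a client's perspective, the policy-pdp-pap creation logged above (one partition, replication factor 1, empty per-topic config {}) corresponds to an ordinary admin-API request; the actual requester in this run is a policy component, not shown here. A minimal sketch assuming the kafka-python package:

from kafka.admin import KafkaAdminClient, NewTopic

# Hypothetical client-side equivalent of the creation logged above:
# one partition, replication factor 1, no per-topic config overrides.
admin = KafkaAdminClient(bootstrap_servers="localhost:29092")
admin.create_topics([NewTopic(name="policy-pdp-pap", num_partitions=1, replication_factor=1)])
admin.close()

The __consumer_offsets topic that follows is different: the broker materializes it itself (50 partitions, per offsets.topic.num.partitions above) the first time a consumer group commits offsets, which is why no client request for it appears in the log.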
14:59:57 kafka | [2025-06-13 14:57:13,343] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 14:59:57 kafka | [2025-06-13 14:57:13,343] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 14:59:57 kafka | [2025-06-13 14:57:13,347] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NonExistentReplica to NewReplica (state.change.logger) 14:59:57 kafka | [2025-06-13 14:57:13,347] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 14:59:57 kafka | [2025-06-13 14:57:13,367] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 14:59:57 kafka | [2025-06-13 14:57:13,369] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-pdp-pap-0 (state.change.logger) 14:59:57 kafka | [2025-06-13 14:57:13,370] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 1 become-leader and 0 become-follower partitions (state.change.logger) 14:59:57 kafka | [2025-06-13 14:57:13,373] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 1 partitions (state.change.logger) 14:59:57 kafka | [2025-06-13 14:57:13,373] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NewReplica to OnlineReplica (state.change.logger) 14:59:57 kafka | [2025-06-13 14:57:13,373] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 14:59:57 kafka | [2025-06-13 14:57:13,377] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 1 partitions (state.change.logger) 14:59:57 kafka | [2025-06-13 14:57:13,380] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 14:59:57 kafka | [2025-06-13 14:57:13,382] INFO [Controller id=1] New topics: [Set(__consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(__consumer_offsets,Some(kCb7ZUH-RSyInYvWegYy6A),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, 
addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> 
ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController)
14:59:57 kafka | [2025-06-13 14:57:13,382] INFO [Controller id=1] New partition creation callback for __consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-37,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.controller.KafkaController)
14:59:57 kafka | [2025-06-13 14:57:13,382] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,382] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,382] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,382] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,382] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,382] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,382] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,382] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,382] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,382] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,382] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,382] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,382] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,382] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,382] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,383] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,383] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,383] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,383] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,383] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,383] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,383] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,383] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,383] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,383] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,383] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,383] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,383] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,383] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,383] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,383] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,383] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,383] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,383] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,383] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,383] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,383] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,383] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,383] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,383] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,383] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,383] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,383] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,383] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,384] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,384] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,384] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,384] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,384] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,384] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,384] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
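The run above registers all 50 partitions of the internal __consumer_offsets topic, each with a single assigned replica. Both numbers come from broker configuration rather than from any client; a minimal server.properties sketch of the two relevant keys, with values inferred from this log (the actual compose configuration is not shown here):

    # Why the controller creates partitions 0..49 (50 is also the Kafka default):
    offsets.topic.num.partitions=50
    # Single-broker CSIT stack, hence "assigned replicas 1" and isr=List(1)
    # (the out-of-the-box default is 3):
    offsets.topic.replication.factor=1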
14:59:57 kafka | [2025-06-13 14:57:13,385] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NonExistentReplica to NewReplica (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,385] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NonExistentReplica to NewReplica (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,385] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NonExistentReplica to NewReplica (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,385] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NonExistentReplica to NewReplica (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,385] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NonExistentReplica to NewReplica (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,385] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NonExistentReplica to NewReplica (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,385] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NonExistentReplica to NewReplica (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,385] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NonExistentReplica to NewReplica (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,386] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NonExistentReplica to NewReplica (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,386] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NonExistentReplica to NewReplica (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,386] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NonExistentReplica to NewReplica (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,386] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NonExistentReplica to NewReplica (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,386] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NonExistentReplica to NewReplica (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,386] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NonExistentReplica to NewReplica (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,386] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NonExistentReplica to NewReplica (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,386] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NonExistentReplica to NewReplica (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,386] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NonExistentReplica to NewReplica (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,386] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NonExistentReplica to NewReplica (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,386] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NonExistentReplica to NewReplica (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,386] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NonExistentReplica to NewReplica (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,386] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NonExistentReplica to NewReplica (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,386] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NonExistentReplica to NewReplica (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,386] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NonExistentReplica to NewReplica (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,386] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NonExistentReplica to NewReplica (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,386] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NonExistentReplica to NewReplica (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,386] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NonExistentReplica to NewReplica (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,386] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NonExistentReplica to NewReplica (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,386] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NonExistentReplica to NewReplica (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,386] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NonExistentReplica to NewReplica (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,386] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NonExistentReplica to NewReplica (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,386] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NonExistentReplica to NewReplica (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,387] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NonExistentReplica to NewReplica (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,387] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NonExistentReplica to NewReplica (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,387] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NonExistentReplica to NewReplica (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,387] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NonExistentReplica to NewReplica (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,387] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NonExistentReplica to NewReplica (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,387] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NonExistentReplica to NewReplica (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,387] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NonExistentReplica to NewReplica (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,387] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NonExistentReplica to NewReplica (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,387] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NonExistentReplica to NewReplica (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,387] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NonExistentReplica to NewReplica (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,387] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NonExistentReplica to NewReplica (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,387] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NonExistentReplica to NewReplica (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,387] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NonExistentReplica to NewReplica (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,387] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NonExistentReplica to NewReplica (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,387] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NonExistentReplica to NewReplica (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,387] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NonExistentReplica to NewReplica (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,387] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NonExistentReplica to NewReplica (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,387] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NonExistentReplica to NewReplica (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,387] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NonExistentReplica to NewReplica (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,387] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,416] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition policy-pdp-pap-0 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,421] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions Set(policy-pdp-pap-0) (kafka.server.ReplicaFetcherManager)
14:59:57 kafka | [2025-06-13 14:57:13,421] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 1 partitions (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,538] INFO [LogLoader partition=policy-pdp-pap-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
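Interleaved with the __consumer_offsets bookkeeping, the broker here processes its first LeaderAndIsr request for policy-pdp-pap-0, the topic the policy components communicate over; in this stack it appears via broker-side auto-creation when a client first asks for it. An explicit equivalent with the Kafka admin API would look roughly like the following sketch, assuming the broker is reachable at kafka:9092 as in the log (the class name is illustrative):

    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.NewTopic;
    import java.util.List;
    import java.util.Properties;

    public final class CreatePdpPapTopic {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            try (Admin admin = Admin.create(props)) {
                // 1 partition, replication factor 1: matches policy-pdp-pap-0
                // with ISR [1] in this single-broker log.
                admin.createTopics(List.of(new NewTopic("policy-pdp-pap", 1, (short) 1)))
                     .all().get(); // block until the controller acknowledges
            }
        }
    }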
14:59:57 kafka | [2025-06-13 14:57:13,548] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,548] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,548] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,548] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,548] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,548] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,548] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,548] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,548] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,548] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,548] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,548] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,548] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,548] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,548] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,548] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,548] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,548] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,548] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,548] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,548] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,548] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,549] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,549] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,549] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,549] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,549] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,549] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,549] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,549] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,549] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,549] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,549] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,549] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,549] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,549] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,549] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,549] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,549] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,549] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,549] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,549] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,549] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,549] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,549] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,549] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,549] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,549] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,549] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,549] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
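At this point every partition has completed the controller-side life cycle NonExistentPartition -> NewPartition -> OnlinePartition. The sketch below restates that state machine for orientation only; Kafka's real implementation is the Scala class kafka.controller.PartitionStateMachine, and the enum names here are an illustrative paraphrase of its states:

    // Illustrative restatement of the controller's partition life cycle,
    // not broker source code.
    enum PartitionState {
        NON_EXISTENT,  // no metadata for the partition yet
        NEW,           // replicas assigned (here: replica 1), no leader elected
        ONLINE,        // leader elected and LeaderAndIsr state published
        OFFLINE        // leader lost; never reached in this log
    }
    // Observed above: NON_EXISTENT -> NEW -> ONLINE for all 50 partitions,
    // each ending with LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1)).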
14:59:57 kafka | [2025-06-13 14:57:13,550] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-13 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,550] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-46 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,550] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-9 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,550] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-42 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,550] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-21 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,550] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-17 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,550] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-30 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,550] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-26 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,550] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-5 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,550] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-38 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,550] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,550] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-34 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,550] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-16 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,550] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-45 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,550] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-12 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,550] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-41 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,550] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-24 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,550] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-20 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,550] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-49 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,550] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-0 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,550] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-29 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,550] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-25 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,550] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-8 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,550] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-37 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,550] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-4 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,550] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-33 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,551] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-15 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,551] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-48 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,551] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-11 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,551] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-44 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,551] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-23 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,551] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-19 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,551] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-32 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,551] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-28 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,551] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-7 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,551] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-40 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,551] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-3 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,551] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-36 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,551] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-47 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,551] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-14 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,552] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-43 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,552] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-10 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,552] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-22 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,552] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-18 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,552] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-31 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,552] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-27 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,552] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-39 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,552] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-6 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,552] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-35 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,552] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-2 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,552] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 50 become-leader and 0 become-follower partitions (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,552] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 50 partitions (state.change.logger)
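Each of the 50 TRACE lines above carries the same LeaderAndIsrPartitionState payload, varying only in partitionIndex. A field-for-field sketch of that payload as a Java record, for readability only (the real class is generated from the LeaderAndIsr protocol schema, so this is illustrative rather than the broker's actual type):

    import java.util.List;

    record LeaderAndIsrPartitionState(
            String topicName,         // '__consumer_offsets'
            int partitionIndex,       // 0..49
            int controllerEpoch,      // 1
            int leader,               // broker id 1
            int leaderEpoch,          // 0 for a first election
            List<Integer> isr,        // [1]
            int partitionEpoch,       // 0
            List<Integer> replicas,   // [1]
            List<Integer> addingReplicas,   // [] (no reassignment in flight)
            List<Integer> removingReplicas, // []
            boolean isNew,            // true: the broker must create the log
            byte leaderRecoveryState  // 0 = RECOVERED
    ) {}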
NewReplica to OnlineReplica (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,554] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NewReplica to OnlineReplica (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,554] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NewReplica to OnlineReplica (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,554] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NewReplica to OnlineReplica (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,554] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NewReplica to OnlineReplica (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,554] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NewReplica to OnlineReplica (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,554] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NewReplica to OnlineReplica (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,554] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NewReplica to OnlineReplica (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,554] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NewReplica to OnlineReplica (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,554] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NewReplica to OnlineReplica (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,554] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NewReplica to OnlineReplica (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,554] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NewReplica to OnlineReplica (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,554] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NewReplica to OnlineReplica (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,554] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NewReplica to OnlineReplica (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,554] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NewReplica to OnlineReplica (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,554] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NewReplica to OnlineReplica (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,554] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NewReplica to OnlineReplica (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,554] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NewReplica to OnlineReplica (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,554] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NewReplica to OnlineReplica (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,554] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NewReplica to OnlineReplica (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,554] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NewReplica to OnlineReplica (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,554] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NewReplica to OnlineReplica (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,554] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NewReplica to OnlineReplica (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,554] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NewReplica to OnlineReplica (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,554] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NewReplica to OnlineReplica (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,554] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NewReplica to OnlineReplica (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,554] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NewReplica to OnlineReplica (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,554] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NewReplica to OnlineReplica (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,554] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NewReplica to OnlineReplica (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,554] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NewReplica to OnlineReplica (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,554] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NewReplica to OnlineReplica (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,555] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NewReplica to OnlineReplica (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,555] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NewReplica to OnlineReplica (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,555] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NewReplica to OnlineReplica (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,555] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NewReplica to OnlineReplica (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,555] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NewReplica to OnlineReplica (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,555] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NewReplica to OnlineReplica (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,555] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NewReplica to OnlineReplica (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,555] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NewReplica to OnlineReplica (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,555] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NewReplica to OnlineReplica (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,555] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NewReplica to OnlineReplica (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,555] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NewReplica to OnlineReplica (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,555] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NewReplica to OnlineReplica (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,555] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NewReplica to OnlineReplica (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,555] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,565] INFO Created log for partition policy-pdp-pap-0 in /var/lib/kafka/data/policy-pdp-pap-0 with properties {} (kafka.log.LogManager)
14:59:57 kafka | [2025-06-13 14:57:13,566] INFO [Partition policy-pdp-pap-0 broker=1] No checkpointed highwatermark is found for partition policy-pdp-pap-0 (kafka.cluster.Partition)
14:59:57 kafka | [2025-06-13 14:57:13,567] INFO [Partition policy-pdp-pap-0 broker=1] Log loaded for partition policy-pdp-pap-0 with initial high watermark 0 (kafka.cluster.Partition)
14:59:57 kafka | [2025-06-13 14:57:13,568] INFO [Broker id=1] Leader policy-pdp-pap-0 with topic id Some(4jifs8wHRkq0H0ikcQqZKA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
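[Editor's note] The TRACE run above is the controller's replica state machine moving every __consumer_offsets replica from NewReplica to OnlineReplica; the topic has 50 partitions because the broker default offsets.topic.num.partitions is 50, and policy-pdp-pap-0 comes up alongside it with a single replica. A minimal sketch to inspect that state from outside the container, assuming the broker address kafka:9092 reported in the log is reachable and a recent kafka-clients (3.x) is on the classpath (the class name is illustrative, not part of this build):

    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.TopicDescription;
    import java.util.List;
    import java.util.Properties;

    public class DescribeOffsetsTopic {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            // Broker address as reported by the controller log above.
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            try (Admin admin = Admin.create(props)) {
                TopicDescription d = admin.describeTopics(List.of("__consumer_offsets"))
                        .topicNameValues().get("__consumer_offsets").get();
                // On this single-broker CSIT stack every partition should report leader 1, ISR [1].
                d.partitions().forEach(p -> System.out.printf("partition %d leader=%s isr=%s%n",
                        p.partition(), p.leader().id(), p.isr()));
            }
        }
    }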
14:59:57 kafka | [2025-06-13 14:57:13,576] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition policy-pdp-pap-0 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,582] INFO [Broker id=1] Finished LeaderAndIsr request in 205ms correlationId 1 from controller 1 for 1 partitions (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,586] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=4jifs8wHRkq0H0ikcQqZKA, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,592] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-pdp-pap-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,593] INFO [Broker id=1] Add 1 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,593] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,597] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 for 50 partitions (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,597] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,597] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,597] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,597] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,597] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,597] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,597] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,597] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,597] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,597] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,597] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,597] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,597] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,597] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,597] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,597] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,597] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,597] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,597] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,597] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,597] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,597] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,597] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,597] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,598] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,598] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,598] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,598] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,598] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,598] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,598] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,598] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,598] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,598] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,598] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,598] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,598] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,598] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,598] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,598] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,598] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,598] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,598] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,598] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,598] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,598] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,598] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,598] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,598] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
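[Editor's note] The 50 TRACE entries above are one LeaderAndIsrPartitionState per partition; on this single-broker stack the only field that varies is partitionIndex. A throwaway parser over the raw console output can confirm that at a glance. This is a stand-alone illustrative sketch (class name and regex are mine, not part of the build), fed the log on stdin:

    import java.util.*;
    import java.util.regex.*;

    public class LeaderAndIsrSummary {
        // Matches the per-partition LeaderAndIsrPartitionState fields shown in the TRACE lines above.
        private static final Pattern P = Pattern.compile(
            "topicName='([^']+)', partitionIndex=(\\d+), .*leader=(\\d+), leaderEpoch=(\\d+), isr=\\[([^\\]]*)\\]");

        public static void main(String[] args) {
            Map<String, List<Integer>> byTopic = new TreeMap<>();
            try (Scanner in = new Scanner(System.in)) {
                while (in.hasNextLine()) {
                    Matcher m = P.matcher(in.nextLine());
                    if (m.find()) {
                        byTopic.computeIfAbsent(m.group(1), k -> new ArrayList<>())
                               .add(Integer.parseInt(m.group(2)));
                    }
                }
            }
            // For the log above this should print: __consumer_offsets: 50 partitions
            byTopic.forEach((t, parts) -> System.out.printf("%s: %d partitions%n", t, parts.size()));
        }
    }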
14:59:57 kafka | [2025-06-13 14:57:13,598] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,618] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-3 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,618] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,618] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-41 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,618] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,618] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-33 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,618] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,618] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-19 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,618] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-34 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,618] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-4 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,618] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-11 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,618] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,618] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-49 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,618] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-39 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,618] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-9 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,618] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-24 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,618] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-31 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,618] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-46 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,618] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,618] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-16 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,618] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-2 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,618] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-25 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,618] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-40 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,618] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-47 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,618] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,618] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-32 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,618] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-37 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,618] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-7 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,618] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-22 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,619] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,619] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-44 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,619] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-14 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,619] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-23 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,619] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-38 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,619] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,619] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-45 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,619] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-15 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,619] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-30 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,619] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-0 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,619] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-35 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,619] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-5 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,619] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,619] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-27 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,619] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-42 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,619] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-12 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,619] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-21 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,619] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-36 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,619] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-6 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,619] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-43 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,619] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-13 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,619] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-28 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,629] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-37, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager)
14:59:57 kafka | [2025-06-13 14:57:13,630] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 3 from controller 1 epoch 1 as part of the become-leader transition for 50 partitions (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,636] INFO [LogLoader partition=__consumer_offsets-3, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
14:59:57 kafka | [2025-06-13 14:57:13,637] INFO Created log for partition __consumer_offsets-3 in /var/lib/kafka/data/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
14:59:57 kafka | [2025-06-13 14:57:13,638] INFO [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition)
14:59:57 kafka | [2025-06-13 14:57:13,638] INFO [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition)
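[Editor's note] From here each __consumer_offsets log is created with {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600}: offsets are kept compacted per group/topic/partition key, the broker preserves producer-side compression, and segments roll at 100 MiB. A sketch that reads those configs back from the running broker with the AdminClient describeConfigs API, under the same kafka:9092 assumption as above (class name illustrative):

    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.Config;
    import org.apache.kafka.common.config.ConfigResource;
    import java.util.List;
    import java.util.Properties;

    public class ShowTopicConfigs {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            try (Admin admin = Admin.create(props)) {
                ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "__consumer_offsets");
                Config cfg = admin.describeConfigs(List.of(topic)).all().get().get(topic);
                // Expect cleanup.policy=compact and segment.bytes=104857600, matching the log lines above.
                for (String name : List.of("cleanup.policy", "compression.type", "segment.bytes")) {
                    System.out.println(name + " = " + cfg.get(name).value());
                }
            }
        }
    }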
14:59:57 kafka | [2025-06-13 14:57:13,638] INFO [Broker id=1] Leader __consumer_offsets-3 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,645] INFO [LogLoader partition=__consumer_offsets-18, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
14:59:57 kafka | [2025-06-13 14:57:13,645] INFO Created log for partition __consumer_offsets-18 in /var/lib/kafka/data/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
14:59:57 kafka | [2025-06-13 14:57:13,645] INFO [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition)
14:59:57 kafka | [2025-06-13 14:57:13,646] INFO [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition)
14:59:57 kafka | [2025-06-13 14:57:13,646] INFO [Broker id=1] Leader __consumer_offsets-18 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,653] INFO [LogLoader partition=__consumer_offsets-41, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
14:59:57 kafka | [2025-06-13 14:57:13,653] INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
14:59:57 kafka | [2025-06-13 14:57:13,653] INFO [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition)
14:59:57 kafka | [2025-06-13 14:57:13,653] INFO [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition)
14:59:57 kafka | [2025-06-13 14:57:13,654] INFO [Broker id=1] Leader __consumer_offsets-41 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,662] INFO [LogLoader partition=__consumer_offsets-10, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
14:59:57 kafka | [2025-06-13 14:57:13,663] INFO Created log for partition __consumer_offsets-10 in /var/lib/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
14:59:57 kafka | [2025-06-13 14:57:13,663] INFO [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition)
14:59:57 kafka | [2025-06-13 14:57:13,663] INFO [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition)
14:59:57 kafka | [2025-06-13 14:57:13,663] INFO [Broker id=1] Leader __consumer_offsets-10 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,672] INFO [LogLoader partition=__consumer_offsets-33, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
14:59:57 kafka | [2025-06-13 14:57:13,673] INFO Created log for partition __consumer_offsets-33 in /var/lib/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
14:59:57 kafka | [2025-06-13 14:57:13,673] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition)
14:59:57 kafka | [2025-06-13 14:57:13,673] INFO [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition)
14:59:57 kafka | [2025-06-13 14:57:13,673] INFO [Broker id=1] Leader __consumer_offsets-33 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,683] INFO [LogLoader partition=__consumer_offsets-48, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
14:59:57 kafka | [2025-06-13 14:57:13,684] INFO Created log for partition __consumer_offsets-48 in /var/lib/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
14:59:57 kafka | [2025-06-13 14:57:13,684] INFO [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition)
14:59:57 kafka | [2025-06-13 14:57:13,684] INFO [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition)
14:59:57 kafka | [2025-06-13 14:57:13,684] INFO [Broker id=1] Leader __consumer_offsets-48 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,693] INFO [LogLoader partition=__consumer_offsets-19, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
14:59:57 kafka | [2025-06-13 14:57:13,695] INFO Created log for partition __consumer_offsets-19 in /var/lib/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
14:59:57 kafka | [2025-06-13 14:57:13,696] INFO [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition)
14:59:57 kafka | [2025-06-13 14:57:13,696] INFO [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition)
14:59:57 kafka | [2025-06-13 14:57:13,696] INFO [Broker id=1] Leader __consumer_offsets-19 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,703] INFO [LogLoader partition=__consumer_offsets-34, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
14:59:57 kafka | [2025-06-13 14:57:13,704] INFO Created log for partition __consumer_offsets-34 in /var/lib/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
14:59:57 kafka | [2025-06-13 14:57:13,705] INFO [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition)
14:59:57 kafka | [2025-06-13 14:57:13,705] INFO [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition)
14:59:57 kafka | [2025-06-13 14:57:13,705] INFO [Broker id=1] Leader __consumer_offsets-34 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,717] INFO [LogLoader partition=__consumer_offsets-4, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
14:59:57 kafka | [2025-06-13 14:57:13,718] INFO Created log for partition __consumer_offsets-4 in /var/lib/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
14:59:57 kafka | [2025-06-13 14:57:13,719] INFO [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition)
14:59:57 kafka | [2025-06-13 14:57:13,719] INFO [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition)
14:59:57 kafka | [2025-06-13 14:57:13,719] INFO [Broker id=1] Leader __consumer_offsets-4 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,728] INFO [LogLoader partition=__consumer_offsets-11, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
14:59:57 kafka | [2025-06-13 14:57:13,729] INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
14:59:57 kafka | [2025-06-13 14:57:13,729] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition)
14:59:57 kafka | [2025-06-13 14:57:13,729] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition)
14:59:57 kafka | [2025-06-13 14:57:13,729] INFO [Broker id=1] Leader __consumer_offsets-11 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,738] INFO [LogLoader partition=__consumer_offsets-26, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
14:59:57 kafka | [2025-06-13 14:57:13,740] INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
14:59:57 kafka | [2025-06-13 14:57:13,740] INFO [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition)
14:59:57 kafka | [2025-06-13 14:57:13,740] INFO [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition)
14:59:57 kafka | [2025-06-13 14:57:13,740] INFO [Broker id=1] Leader __consumer_offsets-26 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,751] INFO [LogLoader partition=__consumer_offsets-49, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
14:59:57 kafka | [2025-06-13 14:57:13,752] INFO Created log for partition __consumer_offsets-49 in /var/lib/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
14:59:57 kafka | [2025-06-13 14:57:13,752] INFO [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition)
14:59:57 kafka | [2025-06-13 14:57:13,752] INFO [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition)
14:59:57 kafka | [2025-06-13 14:57:13,753] INFO [Broker id=1] Leader __consumer_offsets-49 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
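[Editor's note] "No checkpointed highwatermark" and "initial high watermark 0" simply mean these logs are brand new; they fill up once consumer groups start committing. A minimal consumer sketch showing how a commit ends up in one of the 50 __consumer_offsets partitions, reading the policy-pdp-pap topic created earlier in this log (the group id csit-debug is illustrative, not one the CSIT uses):

    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;
    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;

    public class PdpPapTail {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "csit-debug"); // illustrative group id
            props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("policy-pdp-pap"));
                for (ConsumerRecord<String, String> r : consumer.poll(Duration.ofSeconds(5))) {
                    System.out.printf("offset=%d value=%s%n", r.offset(), r.value());
                }
                // commitSync() writes this group's offsets into __consumer_offsets.
                consumer.commitSync();
            }
        }
    }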
14:59:57 kafka | [2025-06-13 14:57:13,761] INFO [LogLoader partition=__consumer_offsets-39, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
14:59:57 kafka | [2025-06-13 14:57:13,762] INFO Created log for partition __consumer_offsets-39 in /var/lib/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
14:59:57 kafka | [2025-06-13 14:57:13,762] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition)
14:59:57 kafka | [2025-06-13 14:57:13,763] INFO [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition)
14:59:57 kafka | [2025-06-13 14:57:13,763] INFO [Broker id=1] Leader __consumer_offsets-39 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,770] INFO [LogLoader partition=__consumer_offsets-9, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
14:59:57 kafka | [2025-06-13 14:57:13,771] INFO Created log for partition __consumer_offsets-9 in /var/lib/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
14:59:57 kafka | [2025-06-13 14:57:13,772] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition)
14:59:57 kafka | [2025-06-13 14:57:13,772] INFO [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition)
14:59:57 kafka | [2025-06-13 14:57:13,772] INFO [Broker id=1] Leader __consumer_offsets-9 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,779] INFO [LogLoader partition=__consumer_offsets-24, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
14:59:57 kafka | [2025-06-13 14:57:13,780] INFO Created log for partition __consumer_offsets-24 in /var/lib/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
14:59:57 kafka | [2025-06-13 14:57:13,780] INFO [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition)
14:59:57 kafka | [2025-06-13 14:57:13,780] INFO [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition)
14:59:57 kafka | [2025-06-13 14:57:13,781] INFO [Broker id=1] Leader __consumer_offsets-24 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,788] INFO [LogLoader partition=__consumer_offsets-31, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
14:59:57 kafka | [2025-06-13 14:57:13,790] INFO Created log for partition __consumer_offsets-31 in /var/lib/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
14:59:57 kafka | [2025-06-13 14:57:13,790] INFO [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition)
14:59:57 kafka | [2025-06-13 14:57:13,790] INFO [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition)
14:59:57 kafka | [2025-06-13 14:57:13,790] INFO [Broker id=1] Leader __consumer_offsets-31 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,798] INFO [LogLoader partition=__consumer_offsets-46, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
14:59:57 kafka | [2025-06-13 14:57:13,798] INFO Created log for partition __consumer_offsets-46 in /var/lib/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
14:59:57 kafka | [2025-06-13 14:57:13,799] INFO [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition)
14:59:57 kafka | [2025-06-13 14:57:13,799] INFO [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition)
14:59:57 kafka | [2025-06-13 14:57:13,799] INFO [Broker id=1] Leader __consumer_offsets-46 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,807] INFO [LogLoader partition=__consumer_offsets-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
14:59:57 kafka | [2025-06-13 14:57:13,808] INFO Created log for partition __consumer_offsets-1 in /var/lib/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
14:59:57 kafka | [2025-06-13 14:57:13,808] INFO [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition)
14:59:57 kafka | [2025-06-13 14:57:13,808] INFO [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition)
14:59:57 kafka | [2025-06-13 14:57:13,808] INFO [Broker id=1] Leader __consumer_offsets-1 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,815] INFO [LogLoader partition=__consumer_offsets-16, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
14:59:57 kafka | [2025-06-13 14:57:13,816] INFO Created log for partition __consumer_offsets-16 in /var/lib/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
14:59:57 kafka | [2025-06-13 14:57:13,816] INFO [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition)
14:59:57 kafka | [2025-06-13 14:57:13,816] INFO [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition)
14:59:57 kafka | [2025-06-13 14:57:13,817] INFO [Broker id=1] Leader __consumer_offsets-16 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,823] INFO [LogLoader partition=__consumer_offsets-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
14:59:57 kafka | [2025-06-13 14:57:13,824] INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
14:59:57 kafka | [2025-06-13 14:57:13,824] INFO [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition)
14:59:57 kafka | [2025-06-13 14:57:13,824] INFO [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition)
14:59:57 kafka | [2025-06-13 14:57:13,824] INFO [Broker id=1] Leader __consumer_offsets-2 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,833] INFO [LogLoader partition=__consumer_offsets-25, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
14:59:57 kafka | [2025-06-13 14:57:13,834] INFO Created log for partition __consumer_offsets-25 in /var/lib/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
14:59:57 kafka | [2025-06-13 14:57:13,835] INFO [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition)
14:59:57 kafka | [2025-06-13 14:57:13,835] INFO [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition)
14:59:57 kafka | [2025-06-13 14:57:13,835] INFO [Broker id=1] Leader __consumer_offsets-25 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,843] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
14:59:57 kafka | [2025-06-13 14:57:13,844] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
14:59:57 kafka | [2025-06-13 14:57:13,844] INFO [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition)
14:59:57 kafka | [2025-06-13 14:57:13,844] INFO [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition)
14:59:57 kafka | [2025-06-13 14:57:13,844] INFO [Broker id=1] Leader __consumer_offsets-40 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,853] INFO [LogLoader partition=__consumer_offsets-47, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
14:59:57 kafka | [2025-06-13 14:57:13,854] INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
14:59:57 kafka | [2025-06-13 14:57:13,854] INFO [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition)
14:59:57 kafka | [2025-06-13 14:57:13,854] INFO [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition)
14:59:57 kafka | [2025-06-13 14:57:13,854] INFO [Broker id=1] Leader __consumer_offsets-47 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:13,861] INFO [LogLoader partition=__consumer_offsets-17, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
14:59:57 kafka | [2025-06-13 14:57:13,862] INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
14:59:57 kafka | [2025-06-13 14:57:13,862] INFO [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition)
14:59:57 kafka | [2025-06-13 14:57:13,863] INFO [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition)
14:59:57 kafka | [2025-06-13 14:57:13,863] INFO [Broker id=1] Leader __consumer_offsets-17 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1.
(state.change.logger) 14:59:57 kafka | [2025-06-13 14:57:13,869] INFO [LogLoader partition=__consumer_offsets-32, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:59:57 kafka | [2025-06-13 14:57:13,870] INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:59:57 kafka | [2025-06-13 14:57:13,870] INFO [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition) 14:59:57 kafka | [2025-06-13 14:57:13,870] INFO [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition) 14:59:57 kafka | [2025-06-13 14:57:13,871] INFO [Broker id=1] Leader __consumer_offsets-32 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 14:59:57 kafka | [2025-06-13 14:57:13,876] INFO [LogLoader partition=__consumer_offsets-37, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:59:57 kafka | [2025-06-13 14:57:13,877] INFO Created log for partition __consumer_offsets-37 in /var/lib/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:59:57 kafka | [2025-06-13 14:57:13,877] INFO [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition) 14:59:57 kafka | [2025-06-13 14:57:13,877] INFO [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition) 14:59:57 kafka | [2025-06-13 14:57:13,877] INFO [Broker id=1] Leader __consumer_offsets-37 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 14:59:57 kafka | [2025-06-13 14:57:13,884] INFO [LogLoader partition=__consumer_offsets-7, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:59:57 kafka | [2025-06-13 14:57:13,885] INFO Created log for partition __consumer_offsets-7 in /var/lib/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:59:57 kafka | [2025-06-13 14:57:13,886] INFO [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition) 14:59:57 kafka | [2025-06-13 14:57:13,886] INFO [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition) 14:59:57 kafka | [2025-06-13 14:57:13,886] INFO [Broker id=1] Leader __consumer_offsets-7 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 14:59:57 kafka | [2025-06-13 14:57:13,894] INFO [LogLoader partition=__consumer_offsets-22, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:59:57 kafka | [2025-06-13 14:57:13,895] INFO Created log for partition __consumer_offsets-22 in /var/lib/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:59:57 kafka | [2025-06-13 14:57:13,895] INFO [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition) 14:59:57 kafka | [2025-06-13 14:57:13,896] INFO [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition) 14:59:57 kafka | [2025-06-13 14:57:13,896] INFO [Broker id=1] Leader __consumer_offsets-22 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 14:59:57 kafka | [2025-06-13 14:57:13,901] INFO [LogLoader partition=__consumer_offsets-29, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:59:57 kafka | [2025-06-13 14:57:13,902] INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:59:57 kafka | [2025-06-13 14:57:13,902] INFO [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition) 14:59:57 kafka | [2025-06-13 14:57:13,902] INFO [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition) 14:59:57 kafka | [2025-06-13 14:57:13,902] INFO [Broker id=1] Leader __consumer_offsets-29 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 14:59:57 kafka | [2025-06-13 14:57:13,910] INFO [LogLoader partition=__consumer_offsets-44, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:59:57 kafka | [2025-06-13 14:57:13,911] INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:59:57 kafka | [2025-06-13 14:57:13,911] INFO [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition) 14:59:57 kafka | [2025-06-13 14:57:13,911] INFO [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition) 14:59:57 kafka | [2025-06-13 14:57:13,911] INFO [Broker id=1] Leader __consumer_offsets-44 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 14:59:57 kafka | [2025-06-13 14:57:13,917] INFO [LogLoader partition=__consumer_offsets-14, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:59:57 kafka | [2025-06-13 14:57:13,918] INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:59:57 kafka | [2025-06-13 14:57:13,918] INFO [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition) 14:59:57 kafka | [2025-06-13 14:57:13,918] INFO [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition) 14:59:57 kafka | [2025-06-13 14:57:13,918] INFO [Broker id=1] Leader __consumer_offsets-14 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 14:59:57 kafka | [2025-06-13 14:57:13,927] INFO [LogLoader partition=__consumer_offsets-23, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:59:57 kafka | [2025-06-13 14:57:13,928] INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:59:57 kafka | [2025-06-13 14:57:13,928] INFO [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition) 14:59:57 kafka | [2025-06-13 14:57:13,929] INFO [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition) 14:59:57 kafka | [2025-06-13 14:57:13,929] INFO [Broker id=1] Leader __consumer_offsets-23 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 14:59:57 kafka | [2025-06-13 14:57:13,937] INFO [LogLoader partition=__consumer_offsets-38, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:59:57 kafka | [2025-06-13 14:57:13,938] INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:59:57 kafka | [2025-06-13 14:57:13,939] INFO [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition) 14:59:57 kafka | [2025-06-13 14:57:13,939] INFO [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition) 14:59:57 kafka | [2025-06-13 14:57:13,939] INFO [Broker id=1] Leader __consumer_offsets-38 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 14:59:57 kafka | [2025-06-13 14:57:13,946] INFO [LogLoader partition=__consumer_offsets-8, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:59:57 kafka | [2025-06-13 14:57:13,946] INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:59:57 kafka | [2025-06-13 14:57:13,947] INFO [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition) 14:59:57 kafka | [2025-06-13 14:57:13,947] INFO [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition) 14:59:57 kafka | [2025-06-13 14:57:13,947] INFO [Broker id=1] Leader __consumer_offsets-8 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 14:59:57 kafka | [2025-06-13 14:57:13,954] INFO [LogLoader partition=__consumer_offsets-45, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:59:57 kafka | [2025-06-13 14:57:13,955] INFO Created log for partition __consumer_offsets-45 in /var/lib/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:59:57 kafka | [2025-06-13 14:57:13,955] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition) 14:59:57 kafka | [2025-06-13 14:57:13,955] INFO [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition) 14:59:57 kafka | [2025-06-13 14:57:13,956] INFO [Broker id=1] Leader __consumer_offsets-45 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 14:59:57 kafka | [2025-06-13 14:57:13,967] INFO [LogLoader partition=__consumer_offsets-15, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:59:57 kafka | [2025-06-13 14:57:13,968] INFO Created log for partition __consumer_offsets-15 in /var/lib/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:59:57 kafka | [2025-06-13 14:57:13,968] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition) 14:59:57 kafka | [2025-06-13 14:57:13,968] INFO [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition) 14:59:57 kafka | [2025-06-13 14:57:13,968] INFO [Broker id=1] Leader __consumer_offsets-15 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 14:59:57 kafka | [2025-06-13 14:57:13,978] INFO [LogLoader partition=__consumer_offsets-30, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:59:57 kafka | [2025-06-13 14:57:13,978] INFO Created log for partition __consumer_offsets-30 in /var/lib/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:59:57 kafka | [2025-06-13 14:57:13,979] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition) 14:59:57 kafka | [2025-06-13 14:57:13,979] INFO [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition) 14:59:57 kafka | [2025-06-13 14:57:13,979] INFO [Broker id=1] Leader __consumer_offsets-30 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 14:59:57 kafka | [2025-06-13 14:57:13,986] INFO [LogLoader partition=__consumer_offsets-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:59:57 kafka | [2025-06-13 14:57:13,987] INFO Created log for partition __consumer_offsets-0 in /var/lib/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:59:57 kafka | [2025-06-13 14:57:13,987] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition) 14:59:57 kafka | [2025-06-13 14:57:13,987] INFO [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition) 14:59:57 kafka | [2025-06-13 14:57:13,987] INFO [Broker id=1] Leader __consumer_offsets-0 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 14:59:57 kafka | [2025-06-13 14:57:13,995] INFO [LogLoader partition=__consumer_offsets-35, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:59:57 kafka | [2025-06-13 14:57:13,995] INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:59:57 kafka | [2025-06-13 14:57:13,996] INFO [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition) 14:59:57 kafka | [2025-06-13 14:57:13,996] INFO [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition) 14:59:57 kafka | [2025-06-13 14:57:13,996] INFO [Broker id=1] Leader __consumer_offsets-35 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 14:59:57 kafka | [2025-06-13 14:57:14,002] INFO [LogLoader partition=__consumer_offsets-5, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:59:57 kafka | [2025-06-13 14:57:14,002] INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:59:57 kafka | [2025-06-13 14:57:14,003] INFO [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition) 14:59:57 kafka | [2025-06-13 14:57:14,003] INFO [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition) 14:59:57 kafka | [2025-06-13 14:57:14,003] INFO [Broker id=1] Leader __consumer_offsets-5 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 14:59:57 kafka | [2025-06-13 14:57:14,010] INFO [LogLoader partition=__consumer_offsets-20, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:59:57 kafka | [2025-06-13 14:57:14,010] INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:59:57 kafka | [2025-06-13 14:57:14,010] INFO [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition) 14:59:57 kafka | [2025-06-13 14:57:14,010] INFO [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition) 14:59:57 kafka | [2025-06-13 14:57:14,010] INFO [Broker id=1] Leader __consumer_offsets-20 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 14:59:57 kafka | [2025-06-13 14:57:14,019] INFO [LogLoader partition=__consumer_offsets-27, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:59:57 kafka | [2025-06-13 14:57:14,020] INFO Created log for partition __consumer_offsets-27 in /var/lib/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:59:57 kafka | [2025-06-13 14:57:14,020] INFO [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition) 14:59:57 kafka | [2025-06-13 14:57:14,020] INFO [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition) 14:59:57 kafka | [2025-06-13 14:57:14,020] INFO [Broker id=1] Leader __consumer_offsets-27 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 14:59:57 kafka | [2025-06-13 14:57:14,028] INFO [LogLoader partition=__consumer_offsets-42, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:59:57 kafka | [2025-06-13 14:57:14,028] INFO Created log for partition __consumer_offsets-42 in /var/lib/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:59:57 kafka | [2025-06-13 14:57:14,028] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition) 14:59:57 kafka | [2025-06-13 14:57:14,029] INFO [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition) 14:59:57 kafka | [2025-06-13 14:57:14,029] INFO [Broker id=1] Leader __consumer_offsets-42 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 14:59:57 kafka | [2025-06-13 14:57:14,039] INFO [LogLoader partition=__consumer_offsets-12, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:59:57 kafka | [2025-06-13 14:57:14,039] INFO Created log for partition __consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:59:57 kafka | [2025-06-13 14:57:14,040] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition) 14:59:57 kafka | [2025-06-13 14:57:14,040] INFO [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition) 14:59:57 kafka | [2025-06-13 14:57:14,040] INFO [Broker id=1] Leader __consumer_offsets-12 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 14:59:57 kafka | [2025-06-13 14:57:14,048] INFO [LogLoader partition=__consumer_offsets-21, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:59:57 kafka | [2025-06-13 14:57:14,049] INFO Created log for partition __consumer_offsets-21 in /var/lib/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:59:57 kafka | [2025-06-13 14:57:14,049] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition) 14:59:57 kafka | [2025-06-13 14:57:14,049] INFO [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition) 14:59:57 kafka | [2025-06-13 14:57:14,050] INFO [Broker id=1] Leader __consumer_offsets-21 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 14:59:57 kafka | [2025-06-13 14:57:14,056] INFO [LogLoader partition=__consumer_offsets-36, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:59:57 kafka | [2025-06-13 14:57:14,057] INFO Created log for partition __consumer_offsets-36 in /var/lib/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:59:57 kafka | [2025-06-13 14:57:14,057] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition) 14:59:57 kafka | [2025-06-13 14:57:14,057] INFO [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition) 14:59:57 kafka | [2025-06-13 14:57:14,057] INFO [Broker id=1] Leader __consumer_offsets-36 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 14:59:57 kafka | [2025-06-13 14:57:14,066] INFO [LogLoader partition=__consumer_offsets-6, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:59:57 kafka | [2025-06-13 14:57:14,067] INFO Created log for partition __consumer_offsets-6 in /var/lib/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:59:57 kafka | [2025-06-13 14:57:14,067] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition) 14:59:57 kafka | [2025-06-13 14:57:14,067] INFO [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition) 14:59:57 kafka | [2025-06-13 14:57:14,067] INFO [Broker id=1] Leader __consumer_offsets-6 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 14:59:57 kafka | [2025-06-13 14:57:14,074] INFO [LogLoader partition=__consumer_offsets-43, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:59:57 kafka | [2025-06-13 14:57:14,075] INFO Created log for partition __consumer_offsets-43 in /var/lib/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:59:57 kafka | [2025-06-13 14:57:14,075] INFO [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition) 14:59:57 kafka | [2025-06-13 14:57:14,075] INFO [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition) 14:59:57 kafka | [2025-06-13 14:57:14,075] INFO [Broker id=1] Leader __consumer_offsets-43 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 14:59:57 kafka | [2025-06-13 14:57:14,083] INFO [LogLoader partition=__consumer_offsets-13, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:59:57 kafka | [2025-06-13 14:57:14,084] INFO Created log for partition __consumer_offsets-13 in /var/lib/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:59:57 kafka | [2025-06-13 14:57:14,084] INFO [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition) 14:59:57 kafka | [2025-06-13 14:57:14,084] INFO [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition) 14:59:57 kafka | [2025-06-13 14:57:14,084] INFO [Broker id=1] Leader __consumer_offsets-13 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 14:59:57 kafka | [2025-06-13 14:57:14,091] INFO [LogLoader partition=__consumer_offsets-28, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:59:57 kafka | [2025-06-13 14:57:14,091] INFO Created log for partition __consumer_offsets-28 in /var/lib/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:59:57 kafka | [2025-06-13 14:57:14,091] INFO [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition) 14:59:57 kafka | [2025-06-13 14:57:14,091] INFO [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition) 14:59:57 kafka | [2025-06-13 14:57:14,092] INFO [Broker id=1] Leader __consumer_offsets-28 with topic id Some(kCb7ZUH-RSyInYvWegYy6A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
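The entries above show the broker creating all 50 partitions of the internal __consumer_offsets topic with cleanup.policy=compact, compression.type=producer and segment.bytes=104857600, each led by broker 1 (the only replica in this single-node CSIT setup). A minimal sketch of how one could verify that layout from outside the container, assuming the kafka-python client and a broker reachable on localhost:9092 (both illustrative choices, not taken from this job):

    from kafka import KafkaConsumer
    from kafka.admin import KafkaAdminClient, ConfigResource, ConfigResourceType

    BOOTSTRAP = "localhost:9092"  # hypothetical broker address

    # Should report the 50 partitions whose creation is logged above
    # (partitions_for_topic returns None if the topic does not exist yet).
    consumer = KafkaConsumer(bootstrap_servers=BOOTSTRAP)
    parts = consumer.partitions_for_topic("__consumer_offsets") or set()
    print(f"__consumer_offsets: {len(parts)} partitions")

    # Should echo the per-topic properties from the "Created log" entries:
    # cleanup.policy=compact, compression.type=producer, segment.bytes=104857600.
    admin = KafkaAdminClient(bootstrap_servers=BOOTSTRAP)
    resource = ConfigResource(ConfigResourceType.TOPIC, "__consumer_offsets")
    for response in admin.describe_configs([resource]):
        print(response)  # raw DescribeConfigs response, one per broker queried

    admin.close()
    consumer.close()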
14:59:57  kafka  | [2025-06-13 14:57:14,094] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger)
14:59:57  kafka  | [2025-06-13 14:57:14,094] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger)
14:59:57  kafka  | [2025-06-13 14:57:14,094] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger)
14:59:57  kafka  | [2025-06-13 14:57:14,094] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger)
14:59:57  kafka  | [2025-06-13 14:57:14,094] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger)
14:59:57  kafka  | [2025-06-13 14:57:14,094] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-48 (state.change.logger)
14:59:57  kafka  | [2025-06-13 14:57:14,094] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger)
14:59:57  kafka  | [2025-06-13 14:57:14,094] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-34 (state.change.logger)
14:59:57  kafka  | [2025-06-13 14:57:14,094] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-4 (state.change.logger)
14:59:57  kafka  | [2025-06-13 14:57:14,094] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger)
14:59:57  kafka  | [2025-06-13 14:57:14,094] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-26 (state.change.logger)
14:59:57  kafka  | [2025-06-13 14:57:14,094] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-49 (state.change.logger)
14:59:57  kafka  | [2025-06-13 14:57:14,094] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-39 (state.change.logger)
14:59:57  kafka  | [2025-06-13 14:57:14,094] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-9 (state.change.logger)
14:59:57  kafka  | [2025-06-13 14:57:14,094] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-24 (state.change.logger)
14:59:57  kafka  | [2025-06-13 14:57:14,094] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-31 (state.change.logger)
14:59:57  kafka  | [2025-06-13 14:57:14,094] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-46 (state.change.logger)
14:59:57  kafka  | [2025-06-13 14:57:14,094] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-1 (state.change.logger)
14:59:57  kafka  | [2025-06-13 14:57:14,094] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-16 (state.change.logger)
14:59:57  kafka  | [2025-06-13 14:57:14,094] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger)
14:59:57  kafka  | [2025-06-13 14:57:14,094] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-25 (state.change.logger)
14:59:57  kafka  | [2025-06-13 14:57:14,095] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-40 (state.change.logger)
14:59:57  kafka  | [2025-06-13 14:57:14,095] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-47 (state.change.logger)
14:59:57  kafka  | [2025-06-13 14:57:14,095] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger)
14:59:57  kafka  | [2025-06-13 14:57:14,095] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-32 (state.change.logger)
14:59:57  kafka  | [2025-06-13 14:57:14,095] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-37 (state.change.logger)
14:59:57  kafka  | [2025-06-13 14:57:14,095] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-7 (state.change.logger)
14:59:57  kafka  | [2025-06-13 14:57:14,095] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-22 (state.change.logger)
14:59:57  kafka  | [2025-06-13 14:57:14,095] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger)
14:59:57  kafka  | [2025-06-13 14:57:14,095] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-44 (state.change.logger)
14:59:57  kafka  | [2025-06-13 14:57:14,095] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-14 (state.change.logger)
14:59:57  kafka  | [2025-06-13 14:57:14,095] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger)
14:59:57  kafka  | [2025-06-13 14:57:14,095] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger)
14:59:57  kafka  | [2025-06-13 14:57:14,095] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-8 (state.change.logger)
14:59:57  kafka  | [2025-06-13 14:57:14,095] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-45 (state.change.logger)
14:59:57  kafka  | [2025-06-13 14:57:14,095] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-15 (state.change.logger)
14:59:57  kafka  | [2025-06-13 14:57:14,095] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-30 (state.change.logger)
14:59:57  kafka  | [2025-06-13 14:57:14,095] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-0 (state.change.logger)
14:59:57  kafka  | [2025-06-13 14:57:14,095] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-35 (state.change.logger)
14:59:57  kafka  | [2025-06-13 14:57:14,095] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-5 (state.change.logger)
14:59:57  kafka  | [2025-06-13 14:57:14,095] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-20 (state.change.logger)
14:59:57  kafka  | [2025-06-13 14:57:14,095] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-27 (state.change.logger)
14:59:57  kafka  | [2025-06-13 14:57:14,095] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-42 (state.change.logger)
14:59:57  kafka  | [2025-06-13 14:57:14,095] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-12 (state.change.logger)
14:59:57  kafka  | [2025-06-13 14:57:14,095] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-21 (state.change.logger)
14:59:57  kafka  | [2025-06-13 14:57:14,095] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-36 (state.change.logger)
14:59:57  kafka  | [2025-06-13 14:57:14,095] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-6 (state.change.logger)
14:59:57  kafka  | [2025-06-13 14:57:14,095] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-43 (state.change.logger)
14:59:57  kafka  | [2025-06-13 14:57:14,095] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-13 (state.change.logger)
14:59:57  kafka  | [2025-06-13 14:57:14,095] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-28 (state.change.logger)
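Each "Elected as the group coordinator for partition N" entry below makes this broker the coordinator for every consumer group that hashes onto __consumer_offsets-N, and the paired GroupMetadataManager entry schedules loading that partition's offsets and group metadata into the cache. Upstream Kafka maps a group to its partition via GroupMetadataManager.partitionFor(groupId), i.e. abs(groupId.hashCode) % partitionCount, so with the 50 partitions seen here a group's coordinator partition can be predicted. A sketch, with the group id purely illustrative:

    def java_string_hashcode(s: str) -> int:
        # Java String.hashCode(): h = 31*h + c over signed 32-bit arithmetic.
        h = 0
        for ch in s:
            h = (31 * h + ord(ch)) & 0xFFFFFFFF
        return h - 0x100000000 if h >= 0x80000000 else h

    def coordinator_partition(group_id: str, num_partitions: int = 50) -> int:
        h = java_string_hashcode(group_id)
        # Kafka's Utils.abs maps Integer.MIN_VALUE to 0 instead of overflowing.
        return (0 if h == -0x80000000 else abs(h)) % num_partitions

    print(coordinator_partition("policy-pap"))  # hypothetical group id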
14:59:57  kafka  | [2025-06-13 14:57:14,096] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:59:57  kafka  | [2025-06-13 14:57:14,098] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
14:59:57  kafka  | [2025-06-13 14:57:14,099] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:59:57  kafka  | [2025-06-13 14:57:14,099] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
14:59:57  kafka  | [2025-06-13 14:57:14,099] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:59:57  kafka  | [2025-06-13 14:57:14,100] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
14:59:57  kafka  | [2025-06-13 14:57:14,100] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:59:57  kafka  | [2025-06-13 14:57:14,100] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
14:59:57  kafka  | [2025-06-13 14:57:14,100] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:59:57  kafka  | [2025-06-13 14:57:14,100] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
14:59:57  kafka  | [2025-06-13 14:57:14,100] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:59:57  kafka  | [2025-06-13 14:57:14,100] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
14:59:57  kafka  | [2025-06-13 14:57:14,100] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:59:57  kafka  | [2025-06-13 14:57:14,100] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
14:59:57  kafka  | [2025-06-13 14:57:14,100] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:59:57  kafka  | [2025-06-13 14:57:14,100] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
14:59:57  kafka  | [2025-06-13 14:57:14,100] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:59:57  kafka  | [2025-06-13 14:57:14,100] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
14:59:57  kafka  | [2025-06-13 14:57:14,100] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:59:57  kafka  | [2025-06-13 14:57:14,100] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
14:59:57  kafka  | [2025-06-13 14:57:14,100] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:59:57  kafka  | [2025-06-13 14:57:14,100] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
14:59:57  kafka  | [2025-06-13 14:57:14,100] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:59:57  kafka  | [2025-06-13 14:57:14,100] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
14:59:57  kafka  | [2025-06-13 14:57:14,100] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:59:57  kafka  | [2025-06-13 14:57:14,100] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
14:59:57  kafka  | [2025-06-13 14:57:14,100] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:59:57  kafka  | [2025-06-13 14:57:14,100] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
14:59:57  kafka  | [2025-06-13 14:57:14,100] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:59:57  kafka  | [2025-06-13 14:57:14,100] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
14:59:57  kafka  | [2025-06-13 14:57:14,100] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:59:57  kafka  | [2025-06-13 14:57:14,100] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
14:59:57  kafka  | [2025-06-13 14:57:14,100] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:59:57  kafka  | [2025-06-13 14:57:14,100] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
14:59:57  kafka  | [2025-06-13 14:57:14,100] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:59:57  kafka  | [2025-06-13 14:57:14,100] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
14:59:57  kafka  | [2025-06-13 14:57:14,100] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:59:57  kafka  | [2025-06-13 14:57:14,100] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
14:59:57  kafka  | [2025-06-13 14:57:14,100] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:59:57  kafka  | [2025-06-13 14:57:14,100] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
14:59:57  kafka  | [2025-06-13 14:57:14,100] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:59:57  kafka  | [2025-06-13 14:57:14,100] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
14:59:57  kafka  | [2025-06-13 14:57:14,100] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:59:57  kafka  | [2025-06-13 14:57:14,100] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
14:59:57  kafka  | [2025-06-13 14:57:14,100] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:59:57  kafka  | [2025-06-13 14:57:14,100] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
14:59:57  kafka  | [2025-06-13 14:57:14,100] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:59:57  kafka  | [2025-06-13 14:57:14,100] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
14:59:57  kafka  | [2025-06-13 14:57:14,100] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:59:57  kafka  | [2025-06-13 14:57:14,100] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
14:59:57  kafka  | [2025-06-13 14:57:14,100] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:59:57  kafka  | [2025-06-13 14:57:14,100] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
14:59:57  kafka  | [2025-06-13 14:57:14,100] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:59:57  kafka  | [2025-06-13 14:57:14,100] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
14:59:57  kafka  | [2025-06-13 14:57:14,100] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 22 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:59:57  kafka  | [2025-06-13 14:57:14,100] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
14:59:57  kafka  | [2025-06-13 14:57:14,100] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:59:57  kafka  | [2025-06-13 14:57:14,100] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
14:59:57  kafka  | [2025-06-13 14:57:14,101] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:59:57  kafka  | [2025-06-13 14:57:14,101] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
14:59:57  kafka  | [2025-06-13 14:57:14,101] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:59:57  kafka  | [2025-06-13 14:57:14,101] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
14:59:57  kafka  | [2025-06-13 14:57:14,101] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:59:57  kafka  | [2025-06-13 14:57:14,101] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
14:59:57  kafka  | [2025-06-13 14:57:14,101] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:59:57  kafka  | [2025-06-13 14:57:14,101] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
14:59:57  kafka  | [2025-06-13 14:57:14,101] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:59:57  kafka  | [2025-06-13 14:57:14,101] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
14:59:57  kafka  | [2025-06-13 14:57:14,101] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:59:57  kafka  | [2025-06-13 14:57:14,101] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
14:59:57  kafka  | [2025-06-13 14:57:14,101] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:59:57  kafka  | [2025-06-13 14:57:14,101] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
14:59:57  kafka  | [2025-06-13 14:57:14,101] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:59:57  kafka  | [2025-06-13 14:57:14,101] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
14:59:57  kafka  | [2025-06-13 14:57:14,101] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:59:57  kafka  | [2025-06-13 14:57:14,101] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
14:59:57  kafka  | [2025-06-13 14:57:14,101] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:59:57  kafka  | [2025-06-13 14:57:14,101] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
14:59:57  kafka  | [2025-06-13 14:57:14,101] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:59:57  kafka  | [2025-06-13 14:57:14,101] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
14:59:57  kafka  | [2025-06-13 14:57:14,101] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:59:57  kafka  | [2025-06-13 14:57:14,101] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
14:59:57  kafka  | [2025-06-13 14:57:14,101] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:59:57  kafka  | [2025-06-13 14:57:14,101] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
14:59:57  kafka  | [2025-06-13 14:57:14,101] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:59:57  kafka  | [2025-06-13 14:57:14,101] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
14:59:57  kafka  | [2025-06-13 14:57:14,101] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0
(kafka.coordinator.group.GroupCoordinator) 14:59:57 kafka | [2025-06-13 14:57:14,101] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:59:57 kafka | [2025-06-13 14:57:14,101] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:59:57 kafka | [2025-06-13 14:57:14,101] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:59:57 kafka | [2025-06-13 14:57:14,101] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:59:57 kafka | [2025-06-13 14:57:14,101] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:59:57 kafka | [2025-06-13 14:57:14,101] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:59:57 kafka | [2025-06-13 14:57:14,101] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:59:57 kafka | [2025-06-13 14:57:14,101] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:59:57 kafka | [2025-06-13 14:57:14,101] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:59:57 kafka | [2025-06-13 14:57:14,101] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:59:57 kafka | [2025-06-13 14:57:14,101] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:59:57 kafka | [2025-06-13 14:57:14,101] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:59:57 kafka | [2025-06-13 14:57:14,101] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:59:57 kafka | [2025-06-13 14:57:14,102] INFO [Broker id=1] Finished LeaderAndIsr request in 505ms correlationId 3 from controller 1 for 50 partitions (state.change.logger) 14:59:57 kafka | [2025-06-13 14:57:14,103] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=kCb7ZUH-RSyInYvWegYy6A, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), 
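The block above is the broker winning the group-coordinator election for the partitions of the internal __consumer_offsets topic and scheduling each partition's offsets/metadata load; 50 partitions is Kafka's default offsets.topic.num.partitions. A minimal sketch to confirm the partition count from a client, assuming the confluent-kafka Python package (the CSIT tooling itself is not shown in this log) and the compose-internal bootstrap address kafka:9092:

# Sketch: confirm the __consumer_offsets partition count behind the
# coordinator elections above. Assumes the confluent-kafka package and
# the broker address used in this compose setup (kafka:9092).
from confluent_kafka.admin import AdminClient

admin = AdminClient({"bootstrap.servers": "kafka:9092"})
meta = admin.list_topics(timeout=10)
offsets_topic = meta.topics["__consumer_offsets"]
print(len(offsets_topic.partitions))  # 50 with the default offsets.topic.num.partitions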
14:59:57 kafka | [2025-06-13 14:57:14,105] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-13 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:14,106] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-46 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:14,106] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-9 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:14,106] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-42 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:14,106] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-21 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:14,106] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-17 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:14,106] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-30 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:14,106] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-26 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:14,106] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-5 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:14,106] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-38 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:14,106] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-1 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:14,106] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-34 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:14,107] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-16 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:14,107] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-45 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:14,107] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-12 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:14,107] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-41 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:14,107] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-24 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:14,107] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-20 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:14,107] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-49 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:14,107] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:14,107] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-29 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:14,107] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-25 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:14,107] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-8 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:14,108] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-37 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:14,108] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-4 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:14,108] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-33 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:14,108] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-15 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:14,108] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-48 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:14,108] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-11 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:14,108] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-44 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:14,108] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-23 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:14,108] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-19 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:14,108] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-32 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:14,109] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-28 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:14,109] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-7 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:14,109] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-40 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:14,109] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-3 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:14,109] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-36 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:14,109] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-47 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:14,109] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-14 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:14,109] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-43 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:14,109] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-10 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:14,109] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-22 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:14,109] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-18 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:14,109] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-31 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:14,109] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-27 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:14,109] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-39 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:14,110] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-6 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:14,110] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-35 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:14,110] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-2 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:14,110] INFO [Broker id=1] Add 50 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:14,110] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 11 milliseconds for epoch 0, of which 4 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:59:57 kafka | [2025-06-13 14:57:14,111] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 4 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
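Each TRACE entry above records the broker caching the leader/ISR state for one __consumer_offsets partition from the controller's UpdateMetadata request (correlation id 4); a client observes the same view through an ordinary Metadata request. A sketch, under the same confluent-kafka assumption as above, that prints the cached leadership per partition:

# Sketch: read back the leader/replica/ISR view that the broker just cached,
# as a client sees it via a Metadata request. PartitionMetadata exposes
# leader, replicas and isrs; with a single broker all are [1], as logged.
from confluent_kafka.admin import AdminClient

admin = AdminClient({"bootstrap.servers": "kafka:9092"})
meta = admin.list_topics("__consumer_offsets", timeout=10)
for pid, p in sorted(meta.topics["__consumer_offsets"].partitions.items()):
    print(f"partition {pid}: leader={p.leader} replicas={p.replicas} isr={p.isrs}")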
14:59:57 kafka | [2025-06-13 14:57:14,111] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:59:57 kafka | [2025-06-13 14:57:14,112] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 12 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:59:57 kafka | [2025-06-13 14:57:14,112] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:59:57 kafka | [2025-06-13 14:57:14,112] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:59:57 kafka | [2025-06-13 14:57:14,112] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:59:57 kafka | [2025-06-13 14:57:14,112] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:59:57 kafka | [2025-06-13 14:57:14,112] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:59:57 kafka | [2025-06-13 14:57:14,113] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:59:57 kafka | [2025-06-13 14:57:14,113] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:59:57 kafka | [2025-06-13 14:57:14,114] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 14 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:59:57 kafka | [2025-06-13 14:57:14,114] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:59:57 kafka | [2025-06-13 14:57:14,115] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 15 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:59:57 kafka | [2025-06-13 14:57:14,115] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:59:57 kafka | [2025-06-13 14:57:14,115] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:59:57 kafka | [2025-06-13 14:57:14,115] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:59:57 kafka | [2025-06-13 14:57:14,116] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 16 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:59:57 kafka | [2025-06-13 14:57:14,116] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:59:57 kafka | [2025-06-13 14:57:14,116] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:59:57 kafka | [2025-06-13 14:57:14,117] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 17 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:59:57 kafka | [2025-06-13 14:57:14,117] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:59:57 kafka | [2025-06-13 14:57:14,117] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:59:57 kafka | [2025-06-13 14:57:14,117] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:59:57 kafka | [2025-06-13 14:57:14,118] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:59:57 kafka | [2025-06-13 14:57:14,119] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 19 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:59:57 kafka | [2025-06-13 14:57:14,119] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:59:57 kafka | [2025-06-13 14:57:14,119] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:59:57 kafka | [2025-06-13 14:57:14,120] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 20 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:59:57 kafka | [2025-06-13 14:57:14,120] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 20 milliseconds for epoch 0, of which 20 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:59:57 kafka | [2025-06-13 14:57:14,120] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:59:57 kafka | [2025-06-13 14:57:14,120] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:59:57 kafka | [2025-06-13 14:57:14,120] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:59:57 kafka | [2025-06-13 14:57:14,120] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:59:57 kafka | [2025-06-13 14:57:14,120] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:59:57 kafka | [2025-06-13 14:57:14,120] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:59:57 kafka | [2025-06-13 14:57:14,121] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 20 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:59:57 kafka | [2025-06-13 14:57:14,121] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 20 milliseconds for epoch 0, of which 20 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:59:57 kafka | [2025-06-13 14:57:14,121] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 20 milliseconds for epoch 0, of which 20 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:59:57 kafka | [2025-06-13 14:57:14,121] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 20 milliseconds for epoch 0, of which 20 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:59:57 kafka | [2025-06-13 14:57:14,121] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 20 milliseconds for epoch 0, of which 20 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:59:57 kafka | [2025-06-13 14:57:14,121] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 20 milliseconds for epoch 0, of which 20 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:59:57 kafka | [2025-06-13 14:57:14,121] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 20 milliseconds for epoch 0, of which 20 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:59:57 kafka | [2025-06-13 14:57:14,121] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 20 milliseconds for epoch 0, of which 20 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:59:57 kafka | [2025-06-13 14:57:14,122] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 21 milliseconds for epoch 0, of which 20 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:59:57 kafka | [2025-06-13 14:57:14,122] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 21 milliseconds for epoch 0, of which 21 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:59:57 kafka | [2025-06-13 14:57:14,122] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 21 milliseconds for epoch 0, of which 21 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:59:57 kafka | [2025-06-13 14:57:14,122] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 21 milliseconds for epoch 0, of which 21 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:59:57 kafka | [2025-06-13 14:57:14,122] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 21 milliseconds for epoch 0, of which 21 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:59:57 kafka | [2025-06-13 14:57:14,122] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 21 milliseconds for epoch 0, of which 21 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:59:57 kafka | [2025-06-13 14:57:14,122] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 21 milliseconds for epoch 0, of which 21 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
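The "Finished loading offsets and group metadata" entries mean the coordinator has replayed each __consumer_offsets partition into its in-memory cache; whatever a group commits is served back from there. A sketch of reading a group's committed offsets, again assuming confluent-kafka; the group id policy-pap is taken from the entries that follow, and the topic/partition is purely illustrative:

# Sketch: committed offsets live in the __consumer_offsets partitions whose
# loading is logged above; Consumer.committed() reads them back from the
# coordinator. Topic policy-notification appears later in this log and is
# used here only as an illustrative partition to query.
from confluent_kafka import Consumer, TopicPartition

consumer = Consumer({"bootstrap.servers": "kafka:9092", "group.id": "policy-pap"})
committed = consumer.committed([TopicPartition("policy-notification", 0)], timeout=10)
print(committed[0].offset)  # OFFSET_INVALID (-1001) until the group has committed
consumer.close()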
14:59:57 kafka | [2025-06-13 14:57:14,169] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group policy-pap in Empty state. Created a new member id consumer-policy-pap-4-261fe284-02d9-42cb-944f-e72879472ebf and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
14:59:57 kafka | [2025-06-13 14:57:14,188] INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-policy-pap-4-261fe284-02d9-42cb-944f-e72879472ebf with group instance id None; client reason: need to re-join with the given member-id: consumer-policy-pap-4-261fe284-02d9-42cb-944f-e72879472ebf) (kafka.coordinator.group.GroupCoordinator)
14:59:57 kafka | [2025-06-13 14:57:14,208] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 044ad9e7-7f73-4e67-ada5-d3c6274784bc in Empty state. Created a new member id consumer-044ad9e7-7f73-4e67-ada5-d3c6274784bc-3-642f12ca-8684-4142-b46a-360148203c2f and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
14:59:57 kafka | [2025-06-13 14:57:14,213] INFO [GroupCoordinator 1]: Preparing to rebalance group 044ad9e7-7f73-4e67-ada5-d3c6274784bc in state PreparingRebalance with old generation 0 (__consumer_offsets-5) (reason: Adding new member consumer-044ad9e7-7f73-4e67-ada5-d3c6274784bc-3-642f12ca-8684-4142-b46a-360148203c2f with group instance id None; client reason: need to re-join with the given member-id: consumer-044ad9e7-7f73-4e67-ada5-d3c6274784bc-3-642f12ca-8684-4142-b46a-360148203c2f) (kafka.coordinator.group.GroupCoordinator)
14:59:57 kafka | [2025-06-13 14:57:15,130] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group bcceede6-cf80-4e3b-b200-9e273dce58d5 in Empty state. Created a new member id consumer-bcceede6-cf80-4e3b-b200-9e273dce58d5-2-3e916d49-16d0-43a1-ba43-76e9f3720c11 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
14:59:57 kafka | [2025-06-13 14:57:15,134] INFO [GroupCoordinator 1]: Preparing to rebalance group bcceede6-cf80-4e3b-b200-9e273dce58d5 in state PreparingRebalance with old generation 0 (__consumer_offsets-0) (reason: Adding new member consumer-bcceede6-cf80-4e3b-b200-9e273dce58d5-2-3e916d49-16d0-43a1-ba43-76e9f3720c11 with group instance id None; client reason: need to re-join with the given member-id: consumer-bcceede6-cf80-4e3b-b200-9e273dce58d5-2-3e916d49-16d0-43a1-ba43-76e9f3720c11) (kafka.coordinator.group.GroupCoordinator)
14:59:57 kafka | [2025-06-13 14:57:17,200] INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator)
14:59:57 kafka | [2025-06-13 14:57:17,216] INFO [GroupCoordinator 1]: Stabilized group 044ad9e7-7f73-4e67-ada5-d3c6274784bc generation 1 (__consumer_offsets-5) with 1 members (kafka.coordinator.group.GroupCoordinator)
14:59:57 kafka | [2025-06-13 14:57:17,225] INFO [GroupCoordinator 1]: Assignment received from leader consumer-policy-pap-4-261fe284-02d9-42cb-944f-e72879472ebf for group policy-pap for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
14:59:57 kafka | [2025-06-13 14:57:17,225] INFO [GroupCoordinator 1]: Assignment received from leader consumer-044ad9e7-7f73-4e67-ada5-d3c6274784bc-3-642f12ca-8684-4142-b46a-360148203c2f for group 044ad9e7-7f73-4e67-ada5-d3c6274784bc for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
14:59:57 kafka | [2025-06-13 14:57:18,135] INFO [GroupCoordinator 1]: Stabilized group bcceede6-cf80-4e3b-b200-9e273dce58d5 generation 1 (__consumer_offsets-0) with 1 members (kafka.coordinator.group.GroupCoordinator)
14:59:57 kafka | [2025-06-13 14:57:18,150] INFO [GroupCoordinator 1]: Assignment received from leader consumer-bcceede6-cf80-4e3b-b200-9e273dce58d5-2-3e916d49-16d0-43a1-ba43-76e9f3720c11 for group bcceede6-cf80-4e3b-b200-9e273dce58d5 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
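Each group is pinned to a single __consumer_offsets partition, which is why policy-pap rebalances on __consumer_offsets-24 and, further down, testgrp on __consumer_offsets-3. Kafka's GroupMetadataManager computes abs(groupId.hashCode) % offsets.topic.num.partitions; a sketch reimplementing Java's String.hashCode reproduces both placements seen in this log:

# Sketch: which __consumer_offsets partition hosts a group. Kafka's
# GroupMetadataManager uses abs(groupId.hashCode) % offsets.topic.num.partitions
# (Kafka maps Integer.MIN_VALUE to 0; that edge case is ignored here).
def java_string_hashcode(s: str) -> int:
    h = 0
    for ch in s:
        h = (31 * h + ord(ch)) & 0xFFFFFFFF  # 32-bit overflow, as in Java
    return h - 0x100000000 if h >= 0x80000000 else h  # to signed 32-bit

def offsets_partition_for(group_id: str, num_partitions: int = 50) -> int:
    return abs(java_string_hashcode(group_id)) % num_partitions

print(offsets_partition_for("policy-pap"))  # 24, matching __consumer_offsets-24 above
print(offsets_partition_for("testgrp"))     # 3, matching __consumer_offsets-3 below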
14:59:57 kafka | [2025-06-13 14:57:20,071] INFO Creating topic policy-notification with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient)
14:59:57 kafka | [2025-06-13 14:57:20,085] INFO [Controller id=1] New topics: [Set(policy-notification)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-notification,Some(fqYupnA9Qemly06nHPEaTw),Map(policy-notification-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController)
14:59:57 kafka | [2025-06-13 14:57:20,085] INFO [Controller id=1] New partition creation callback for policy-notification-0 (kafka.controller.KafkaController)
14:59:57 kafka | [2025-06-13 14:57:20,085] INFO [Controller id=1 epoch=1] Changed partition policy-notification-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:20,085] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:20,085] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-notification-0 from NonExistentReplica to NewReplica (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:20,085] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:20,097] INFO [Controller id=1 epoch=1] Changed partition policy-notification-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:20,097] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-notification', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-notification-0 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:20,097] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 1 become-leader and 0 become-follower partitions (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:20,098] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 1 partitions (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:20,098] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-notification-0 from NewReplica to OnlineReplica (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:20,098] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:20,099] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 5 from controller 1 for 1 partitions (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:20,099] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-notification', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 5 from controller 1 epoch 1 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:20,100] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 5 from controller 1 epoch 1 starting the become-leader transition for partition policy-notification-0 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:20,100] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions Set(policy-notification-0) (kafka.server.ReplicaFetcherManager)
14:59:57 kafka | [2025-06-13 14:57:20,100] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 5 from controller 1 epoch 1 as part of the become-leader transition for 1 partitions (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:20,103] INFO [LogLoader partition=policy-notification-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
14:59:57 kafka | [2025-06-13 14:57:20,103] INFO Created log for partition policy-notification-0 in /var/lib/kafka/data/policy-notification-0 with properties {} (kafka.log.LogManager)
14:59:57 kafka | [2025-06-13 14:57:20,104] INFO [Partition policy-notification-0 broker=1] No checkpointed highwatermark is found for partition policy-notification-0 (kafka.cluster.Partition)
14:59:57 kafka | [2025-06-13 14:57:20,104] INFO [Partition policy-notification-0 broker=1] Log loaded for partition policy-notification-0 with initial high watermark 0 (kafka.cluster.Partition)
14:59:57 kafka | [2025-06-13 14:57:20,104] INFO [Broker id=1] Leader policy-notification-0 with topic id Some(fqYupnA9Qemly06nHPEaTw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:20,108] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 5 from controller 1 epoch 1 for the become-leader transition for partition policy-notification-0 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:20,108] INFO [Broker id=1] Finished LeaderAndIsr request in 9ms correlationId 5 from controller 1 for 1 partitions (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:20,109] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=fqYupnA9Qemly06nHPEaTw, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 5 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:20,110] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-notification', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-notification-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 6 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:20,110] INFO [Broker id=1] Add 1 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 6 (state.change.logger)
14:59:57 kafka | [2025-06-13 14:57:20,111] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 6 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
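The controller and broker trace above is the server side of a CreateTopics request for policy-notification with one partition and replication factor 1 (the HashMap(0 -> ArrayBuffer(1)) assignment). A client-side equivalent, again assuming the confluent-kafka AdminClient rather than whatever the CSIT suite actually used:

# Sketch: client-side request that would produce the controller-side trace
# above (topic policy-notification, 1 partition, replication factor 1).
# create_topics returns one future per topic.
from confluent_kafka.admin import AdminClient, NewTopic

admin = AdminClient({"bootstrap.servers": "kafka:9092"})
futures = admin.create_topics(
    [NewTopic("policy-notification", num_partitions=1, replication_factor=1)]
)
futures["policy-notification"].result()  # raises KafkaException on failure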
(state.change.logger) 14:59:57 kafka | [2025-06-13 14:57:20,108] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 5 from controller 1 epoch 1 for the become-leader transition for partition policy-notification-0 (state.change.logger) 14:59:57 kafka | [2025-06-13 14:57:20,108] INFO [Broker id=1] Finished LeaderAndIsr request in 9ms correlationId 5 from controller 1 for 1 partitions (state.change.logger) 14:59:57 kafka | [2025-06-13 14:57:20,109] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=fqYupnA9Qemly06nHPEaTw, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 5 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 14:59:57 kafka | [2025-06-13 14:57:20,110] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-notification', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-notification-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 6 (state.change.logger) 14:59:57 kafka | [2025-06-13 14:57:20,110] INFO [Broker id=1] Add 1 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 6 (state.change.logger) 14:59:57 kafka | [2025-06-13 14:57:20,111] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 6 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 14:59:57 kafka | [2025-06-13 14:58:50,717] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group testgrp in Empty state. Created a new member id rdkafka-c926bd1f-9cd8-41a3-b657-e55b27f99de9 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) 14:59:57 kafka | [2025-06-13 14:58:50,719] INFO [GroupCoordinator 1]: Preparing to rebalance group testgrp in state PreparingRebalance with old generation 0 (__consumer_offsets-3) (reason: Adding new member rdkafka-c926bd1f-9cd8-41a3-b657-e55b27f99de9 with group instance id None; client reason: not provided) (kafka.coordinator.group.GroupCoordinator) 14:59:57 kafka | [2025-06-13 14:58:53,720] INFO [GroupCoordinator 1]: Stabilized group testgrp generation 1 (__consumer_offsets-3) with 1 members (kafka.coordinator.group.GroupCoordinator) 14:59:57 kafka | [2025-06-13 14:58:53,723] INFO [GroupCoordinator 1]: Assignment received from leader rdkafka-c926bd1f-9cd8-41a3-b657-e55b27f99de9 for group testgrp for generation 1. The group has 1 members, 0 of which are static. 
(kafka.coordinator.group.GroupCoordinator) 14:59:57 kafka | [2025-06-13 14:58:53,847] INFO [GroupCoordinator 1]: Preparing to rebalance group testgrp in state PreparingRebalance with old generation 1 (__consumer_offsets-3) (reason: Removing member rdkafka-c926bd1f-9cd8-41a3-b657-e55b27f99de9 on LeaveGroup; client reason: not provided) (kafka.coordinator.group.GroupCoordinator) 14:59:57 kafka | [2025-06-13 14:58:53,848] INFO [GroupCoordinator 1]: Group testgrp with generation 2 is now empty (__consumer_offsets-3) (kafka.coordinator.group.GroupCoordinator) 14:59:57 kafka | [2025-06-13 14:58:53,850] INFO [GroupCoordinator 1]: Member MemberMetadata(memberId=rdkafka-c926bd1f-9cd8-41a3-b657-e55b27f99de9, groupInstanceId=None, clientId=rdkafka, clientHost=/172.17.0.6, sessionTimeoutMs=45000, rebalanceTimeoutMs=300000, supportedProtocols=List(range, roundrobin)) has left group testgrp through explicit `LeaveGroup`; client reason: not provided (kafka.coordinator.group.GroupCoordinator) 14:59:57 policy-api | Waiting for policy-db-migrator port 6824... 14:59:57 policy-api | policy-db-migrator (172.17.0.6:6824) open 14:59:57 policy-api | Policy api config file: /opt/app/policy/api/etc/apiParameters.yaml 14:59:57 policy-api | 14:59:57 policy-api | [Spring Boot ASCII-art startup banner] 14:59:57 policy-api | 14:59:57 policy-api | :: Spring Boot :: (v3.4.6) 14:59:57 policy-api | 14:59:57 policy-api | [2025-06-13T14:56:53.336+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.2.Final 14:59:57 policy-api | [2025-06-13T14:56:53.432+00:00|INFO|PolicyApiApplication|main] Starting PolicyApiApplication using Java 17.0.15 with PID 32 (/app/api.jar started by policy in /opt/app/policy/api/bin) 14:59:57 policy-api | [2025-06-13T14:56:53.433+00:00|INFO|PolicyApiApplication|main] The following 1 profile is active: "default" 14:59:57 policy-api | [2025-06-13T14:56:54.851+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. 14:59:57 policy-api | [2025-06-13T14:56:55.025+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 162 ms. Found 6 JPA repository interfaces.
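The kafka entries earlier in this stream show the broker auto-creating the policy-notification topic and walking the testgrp consumer group through its join, stabilize, and leave lifecycle. A minimal sketch of manual equivalents, assuming the stock Kafka CLI tools are available inside the kafka container (the container name, script names, and paths are assumptions; Apache distributions ship them with a .sh suffix, Confluent images without):

    # Hypothetical manual reproduction of what the broker log records above.
    # Create the single-partition, single-replica topic seen in the log.
    docker exec kafka kafka-topics.sh --bootstrap-server kafka:9092 \
      --create --topic policy-notification --partitions 1 --replication-factor 1
    # Describe the testgrp consumer group: members, assignments, and lag
    # while a consumer (here the rdkafka client from the log) is attached.
    docker exec kafka kafka-consumer-groups.sh --bootstrap-server kafka:9092 \
      --describe --group testgrp
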
14:59:57 policy-api | [2025-06-13T14:56:55.671+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port 6969 (http) 14:59:57 policy-api | [2025-06-13T14:56:55.685+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] 14:59:57 policy-api | [2025-06-13T14:56:55.687+00:00|INFO|StandardService|main] Starting service [Tomcat] 14:59:57 policy-api | [2025-06-13T14:56:55.688+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.41] 14:59:57 policy-api | [2025-06-13T14:56:55.730+00:00|INFO|[/policy/api/v1]|main] Initializing Spring embedded WebApplicationContext 14:59:57 policy-api | [2025-06-13T14:56:55.731+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 2240 ms 14:59:57 policy-api | [2025-06-13T14:56:56.047+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] 14:59:57 policy-api | [2025-06-13T14:56:56.139+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.6.16.Final 14:59:57 policy-api | [2025-06-13T14:56:56.192+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled 14:59:57 policy-api | [2025-06-13T14:56:56.560+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer 14:59:57 policy-api | [2025-06-13T14:56:56.601+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... 14:59:57 policy-api | [2025-06-13T14:56:56.795+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.postgresql.jdbc.PgConnection@239d9cb7 14:59:57 policy-api | [2025-06-13T14:56:56.798+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. 14:59:57 policy-api | [2025-06-13T14:56:56.876+00:00|INFO|pooling|main] HHH10001005: Database info: 14:59:57 policy-api | Database JDBC URL [Connecting through datasource 'HikariDataSource (HikariPool-1)'] 14:59:57 policy-api | Database driver: undefined/unknown 14:59:57 policy-api | Database version: 16.4 14:59:57 policy-api | Autocommit mode: undefined/unknown 14:59:57 policy-api | Isolation level: undefined/unknown 14:59:57 policy-api | Minimum pool size: undefined/unknown 14:59:57 policy-api | Maximum pool size: undefined/unknown 14:59:57 policy-api | [2025-06-13T14:56:58.862+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) 14:59:57 policy-api | [2025-06-13T14:56:58.865+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' 14:59:57 policy-api | [2025-06-13T14:56:59.486+00:00|WARN|ApiDatabaseInitializer|main] Detected multi-versioned type: policytypes/onap.policies.monitoring.tcagen2.v2.yaml 14:59:57 policy-api | [2025-06-13T14:57:00.341+00:00|INFO|ApiDatabaseInitializer|main] Multi-versioned Service Template [onap.policies.Monitoring, onap.policies.monitoring.tcagen2] 14:59:57 policy-api | [2025-06-13T14:57:01.397+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. 
Explicitly configure spring.jpa.open-in-view to disable this warning 14:59:57 policy-api | [2025-06-13T14:57:01.442+00:00|INFO|InitializeUserDetailsBeanManagerConfigurer$InitializeUserDetailsManagerConfigurer|main] Global AuthenticationManager configured with UserDetailsService bean with name inMemoryUserDetailsManager 14:59:57 policy-api | [2025-06-13T14:57:02.058+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoints beneath base path '' 14:59:57 policy-api | [2025-06-13T14:57:02.195+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] 14:59:57 policy-api | [2025-06-13T14:57:02.214+00:00|INFO|TomcatWebServer|main] Tomcat started on port 6969 (http) with context path '/policy/api/v1' 14:59:57 policy-api | [2025-06-13T14:57:02.237+00:00|INFO|PolicyApiApplication|main] Started PolicyApiApplication in 9.55 seconds (process running for 10.142) 14:59:57 policy-api | [2025-06-13T14:57:39.916+00:00|INFO|[/policy/api/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet' 14:59:57 policy-api | [2025-06-13T14:57:39.917+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet' 14:59:57 policy-api | [2025-06-13T14:57:39.918+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 1 ms 14:59:57 policy-api | [2025-06-13T14:58:26.298+00:00|INFO|OrderedServiceImpl|http-nio-6969-exec-4] ***** OrderedServiceImpl implementers: 14:59:57 policy-api | [] 14:59:57 policy-csit | Invoking the robot tests from: xacml-pdp-test.robot xacml-pdp-slas.robot 14:59:57 policy-csit | Run Robot test 14:59:57 policy-csit | ROBOT_VARIABLES=-v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies 14:59:57 policy-csit | -v NODETEMPLATES:/opt/robotworkspace/models/models-examples/src/main/resources/nodetemplates 14:59:57 policy-csit | -v POLICY_API_IP:policy-api:6969 14:59:57 policy-csit | -v POLICY_RUNTIME_ACM_IP:policy-clamp-runtime-acm:6969 14:59:57 policy-csit | -v POLICY_PARTICIPANT_SIM_IP:policy-clamp-ac-sim-ppnt:6969 14:59:57 policy-csit | -v POLICY_PAP_IP:policy-pap:6969 14:59:57 policy-csit | -v APEX_IP:policy-apex-pdp:6969 14:59:57 policy-csit | -v APEX_EVENTS_IP:policy-apex-pdp:23324 14:59:57 policy-csit | -v KAFKA_IP:kafka:9092 14:59:57 policy-csit | -v PROMETHEUS_IP:prometheus:9090 14:59:57 policy-csit | -v POLICY_PDPX_IP:policy-xacml-pdp:6969 14:59:57 policy-csit | -v POLICY_OPA_IP:policy-opa-pdp:8282 14:59:57 policy-csit | -v POLICY_DROOLS_IP:policy-drools-pdp:9696 14:59:57 policy-csit | -v DROOLS_IP:policy-drools-apps:6969 14:59:57 policy-csit | -v DROOLS_IP_2:policy-drools-apps:9696 14:59:57 policy-csit | -v TEMP_FOLDER:/tmp/distribution 14:59:57 policy-csit | -v DISTRIBUTION_IP:policy-distribution:6969 14:59:57 policy-csit | -v TEST_ENV:docker 14:59:57 policy-csit | -v JAEGER_IP:jaeger:16686 14:59:57 policy-csit | Starting Robot test suites ... 
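The wrapper script that actually launches the suites is not part of this log; a minimal sketch of an equivalent invocation, assuming Robot Framework's standard robot CLI and reusing a subset of the ROBOT_VARIABLES listed above, would look like the following. The --outputdir value matches the Output/Log/Report paths reported at the end of the run.

    # Sketch only: the real run script is not shown in this log.
    # Each -v NAME:value pair mirrors a ROBOT_VARIABLES entry above.
    robot --outputdir /tmp/results \
      -v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies \
      -v POLICY_API_IP:policy-api:6969 \
      -v POLICY_PDPX_IP:policy-xacml-pdp:6969 \
      -v KAFKA_IP:kafka:9092 \
      -v PROMETHEUS_IP:prometheus:9090 \
      xacml-pdp-test.robot xacml-pdp-slas.robot
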
14:59:57 policy-csit | ============================================================================== 14:59:57 policy-csit | Xacml-Pdp-Test & Xacml-Pdp-Slas 14:59:57 policy-csit | ============================================================================== 14:59:57 policy-csit | Xacml-Pdp-Test & Xacml-Pdp-Slas.Xacml-Pdp-Test 14:59:57 policy-csit | ============================================================================== 14:59:57 policy-csit | Healthcheck :: Verify policy xacml-pdp health check | PASS | 14:59:57 policy-csit | ------------------------------------------------------------------------------ 14:59:57 policy-csit | Metrics :: Verify policy-xacml-pdp is exporting prometheus metrics | PASS | 14:59:57 policy-csit | ------------------------------------------------------------------------------ 14:59:57 policy-csit | MakeTopics :: Creates the Policy topics | PASS | 14:59:57 policy-csit | ------------------------------------------------------------------------------ 14:59:57 policy-csit | ExecuteXacmlPolicy | PASS | 14:59:57 policy-csit | ------------------------------------------------------------------------------ 14:59:57 policy-csit | Xacml-Pdp-Test & Xacml-Pdp-Slas.Xacml-Pdp-Test | PASS | 14:59:57 policy-csit | 4 tests, 4 passed, 0 failed 14:59:57 policy-csit | ============================================================================== 14:59:57 policy-csit | Xacml-Pdp-Test & Xacml-Pdp-Slas.Xacml-Pdp-Slas 14:59:57 policy-csit | ============================================================================== 14:59:57 policy-csit | WaitForPrometheusServer :: Sleep time to wait for Prometheus serve... | PASS | 14:59:57 policy-csit | ------------------------------------------------------------------------------ 14:59:57 policy-csit | ValidatePolicyDecisionsTotalCounter :: Validate policy decision co... | PASS | 14:59:57 policy-csit | ------------------------------------------------------------------------------ 14:59:57 policy-csit | Xacml-Pdp-Test & Xacml-Pdp-Slas.Xacml-Pdp-Slas | PASS | 14:59:57 policy-csit | 2 tests, 2 passed, 0 failed 14:59:57 policy-csit | ============================================================================== 14:59:57 policy-csit | Xacml-Pdp-Test & Xacml-Pdp-Slas | PASS | 14:59:57 policy-csit | 6 tests, 6 passed, 0 failed 14:59:57 policy-csit | ============================================================================== 14:59:57 policy-csit | Output: /tmp/results/output.xml 14:59:57 policy-csit | Log: /tmp/results/log.html 14:59:57 policy-csit | Report: /tmp/results/report.html 14:59:57 policy-csit | RESULT: 0 14:59:57 policy-db-migrator | Waiting for postgres port 5432... 14:59:57 policy-db-migrator | nc: connect to postgres (172.17.0.2) port 5432 (tcp) failed: Connection refused 14:59:57 policy-db-migrator | nc: connect to postgres (172.17.0.2) port 5432 (tcp) failed: Connection refused 14:59:57 policy-db-migrator | nc: connect to postgres (172.17.0.2) port 5432 (tcp) failed: Connection refused 14:59:57 policy-db-migrator | Connection to postgres (172.17.0.2) 5432 port [tcp/postgresql] succeeded! 14:59:57 policy-db-migrator | Initializing policyadmin... 
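The repeated "Connection refused" lines above come from a readiness gate that polls the database port until postgres is accepting TCP connections, and only then starts the migration. A minimal recreation under that assumption (the migrator's real entrypoint script is not shown here):

    # Hypothetical readiness gate consistent with the log output above.
    # OpenBSD netcat's -z probes the port without sending data; -v prints
    # the "Connection refused" / "succeeded!" lines seen in the log.
    until nc -zv postgres 5432; do
      sleep 2
    done
    # Port is open: safe to begin initializing the policyadmin schema.
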
14:59:57 policy-db-migrator | 321 blocks 14:59:57 policy-db-migrator | Preparing upgrade release version: 0800 14:59:57 policy-db-migrator | Preparing upgrade release version: 0900 14:59:57 policy-db-migrator | Preparing upgrade release version: 1000 14:59:57 policy-db-migrator | Preparing upgrade release version: 1100 14:59:57 policy-db-migrator | Preparing upgrade release version: 1200 14:59:57 policy-db-migrator | Preparing upgrade release version: 1300 14:59:57 policy-db-migrator | Done 14:59:57 policy-db-migrator | List of databases 14:59:57 policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges 14:59:57 policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- 14:59:57 policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 14:59:57 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 14:59:57 policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 14:59:57 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 14:59:57 policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 14:59:57 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 14:59:57 policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 14:59:57 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 14:59:57 policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 14:59:57 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 14:59:57 policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 14:59:57 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 14:59:57 policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | 14:59:57 policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 14:59:57 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 14:59:57 policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 14:59:57 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 14:59:57 policy-db-migrator | (9 rows) 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | name | version 14:59:57 policy-db-migrator | -------------+--------- 14:59:57 policy-db-migrator | policyadmin | 0 14:59:57 policy-db-migrator | (1 row) 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime 14:59:57 policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+-------- 14:59:57 policy-db-migrator | (0 rows) 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | policyadmin: upgrade available: 0 -> 1300 14:59:57 policy-db-migrator | List of databases 14:59:57 policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges 14:59:57 policy-db-migrator | 
-------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- 14:59:57 policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 14:59:57 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 14:59:57 policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 14:59:57 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 14:59:57 policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 14:59:57 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 14:59:57 policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 14:59:57 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 14:59:57 policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 14:59:57 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 14:59:57 policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 14:59:57 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 14:59:57 policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | 14:59:57 policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 14:59:57 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 14:59:57 policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 14:59:57 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 14:59:57 policy-db-migrator | (9 rows) 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | NOTICE: relation "policyadmin_schema_changelog" already exists, skipping 14:59:57 policy-db-migrator | upgrade: 0 -> 1300 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0100-jpapdpgroup_properties.sql 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0120-jpapdpsubgroup_policies.sql 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0130-jpapdpsubgroup_properties.sql 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0140-jpapdpsubgroup_supportedpolicytypes.sql 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0150-jpatoscacapabilityassignment_attributes.sql 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | 
rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0160-jpatoscacapabilityassignment_metadata.sql 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0170-jpatoscacapabilityassignment_occurrences.sql 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0180-jpatoscacapabilityassignment_properties.sql 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0190-jpatoscacapabilitytype_metadata.sql 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0200-jpatoscacapabilitytype_properties.sql 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0210-jpatoscadatatype_constraints.sql 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0220-jpatoscadatatype_metadata.sql 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0230-jpatoscadatatype_properties.sql 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0240-jpatoscanodetemplate_metadata.sql 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0250-jpatoscanodetemplate_properties.sql 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0260-jpatoscanodetype_metadata.sql 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0270-jpatoscanodetype_properties.sql 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0280-jpatoscapolicy_metadata.sql 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0290-jpatoscapolicy_properties.sql 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0300-jpatoscapolicy_targets.sql 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0310-jpatoscapolicytype_metadata.sql 14:59:57 
policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0320-jpatoscapolicytype_properties.sql 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0330-jpatoscapolicytype_targets.sql 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0340-jpatoscapolicytype_triggers.sql 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0350-jpatoscaproperty_constraints.sql 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0360-jpatoscaproperty_metadata.sql 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0370-jpatoscarelationshiptype_metadata.sql 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0380-jpatoscarelationshiptype_properties.sql 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0390-jpatoscarequirement_metadata.sql 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0400-jpatoscarequirement_occurrences.sql 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0410-jpatoscarequirement_properties.sql 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0420-jpatoscaservicetemplate_metadata.sql 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0430-jpatoscatopologytemplate_inputs.sql 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0440-pdpgroup_pdpsubgroup.sql 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0450-pdpgroup.sql 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0460-pdppolicystatus.sql 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > 
upgrade 0470-pdp.sql 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0480-pdpstatistics.sql 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0490-pdpsubgroup_pdp.sql 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0500-pdpsubgroup.sql 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0510-toscacapabilityassignment.sql 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0520-toscacapabilityassignments.sql 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0530-toscacapabilityassignments_toscacapabilityassignment.sql 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0540-toscacapabilitytype.sql 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0550-toscacapabilitytypes.sql 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0560-toscacapabilitytypes_toscacapabilitytype.sql 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0570-toscadatatype.sql 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0580-toscadatatypes.sql 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0590-toscadatatypes_toscadatatype.sql 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0600-toscanodetemplate.sql 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0610-toscanodetemplates.sql 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0620-toscanodetemplates_toscanodetemplate.sql 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 
0630-toscanodetype.sql 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0640-toscanodetypes.sql 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0650-toscanodetypes_toscanodetype.sql 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0660-toscaparameter.sql 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0670-toscapolicies.sql 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0680-toscapolicies_toscapolicy.sql 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0690-toscapolicy.sql 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0700-toscapolicytype.sql 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0710-toscapolicytypes.sql 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0720-toscapolicytypes_toscapolicytype.sql 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0730-toscaproperty.sql 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0740-toscarelationshiptype.sql 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0750-toscarelationshiptypes.sql 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0760-toscarelationshiptypes_toscarelationshiptype.sql 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0770-toscarequirement.sql 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0780-toscarequirements.sql 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0790-toscarequirements_toscarequirement.sql 14:59:57 
policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0800-toscaservicetemplate.sql 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0810-toscatopologytemplate.sql 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0820-toscatrigger.sql 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0830-FK_ToscaNodeTemplate_capabilitiesName.sql 14:59:57 policy-db-migrator | CREATE INDEX 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0840-FK_ToscaNodeTemplate_requirementsName.sql 14:59:57 policy-db-migrator | CREATE INDEX 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0850-FK_ToscaNodeType_requirementsName.sql 14:59:57 policy-db-migrator | CREATE INDEX 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql 14:59:57 policy-db-migrator | CREATE INDEX 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0870-FK_ToscaServiceTemplate_dataTypesName.sql 14:59:57 policy-db-migrator | CREATE INDEX 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0880-FK_ToscaServiceTemplate_nodeTypesName.sql 14:59:57 policy-db-migrator | CREATE INDEX 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0890-FK_ToscaServiceTemplate_policyTypesName.sql 14:59:57 policy-db-migrator | CREATE INDEX 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql 14:59:57 policy-db-migrator | CREATE INDEX 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql 14:59:57 policy-db-migrator | CREATE INDEX 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql 14:59:57 policy-db-migrator | CREATE INDEX 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0940-PdpPolicyStatus_PdpGroup.sql 14:59:57 policy-db-migrator | CREATE INDEX 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql 14:59:57 policy-db-migrator | CREATE INDEX 14:59:57 
policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0960-FK_ToscaNodeTemplate_capabilitiesName.sql 14:59:57 policy-db-migrator | ALTER TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0970-FK_ToscaNodeTemplate_requirementsName.sql 14:59:57 policy-db-migrator | ALTER TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0980-FK_ToscaNodeType_requirementsName.sql 14:59:57 policy-db-migrator | ALTER TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql 14:59:57 policy-db-migrator | ALTER TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 1000-FK_ToscaServiceTemplate_dataTypesName.sql 14:59:57 policy-db-migrator | ALTER TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 1010-FK_ToscaServiceTemplate_nodeTypesName.sql 14:59:57 policy-db-migrator | ALTER TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 1020-FK_ToscaServiceTemplate_policyTypesName.sql 14:59:57 policy-db-migrator | ALTER TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql 14:59:57 policy-db-migrator | ALTER TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql 14:59:57 policy-db-migrator | ALTER TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 1050-FK_ToscaTopologyTemplate_policyName.sql 14:59:57 policy-db-migrator | ALTER TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql 14:59:57 policy-db-migrator | ALTER TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0100-pdp.sql 14:59:57 policy-db-migrator | ALTER TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql 14:59:57 policy-db-migrator | CREATE INDEX 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0120-pk_pdpstatistics.sql 14:59:57 policy-db-migrator | ALTER TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0130-pdpstatistics.sql 14:59:57 policy-db-migrator | ALTER TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 
policy-db-migrator | > upgrade 0140-pk_pdpstatistics.sql 14:59:57 policy-db-migrator | UPDATE 0 14:59:57 policy-db-migrator | ALTER TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0150-pdpstatistics.sql 14:59:57 policy-db-migrator | ALTER TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0160-jpapdpstatistics_enginestats.sql 14:59:57 policy-db-migrator | ALTER TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0170-jpapdpstatistics_enginestats.sql 14:59:57 policy-db-migrator | UPDATE 0 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0180-jpapdpstatistics_enginestats.sql 14:59:57 policy-db-migrator | ALTER TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0190-jpapolicyaudit.sql 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0200-JpaPolicyAuditIndex_timestamp.sql 14:59:57 policy-db-migrator | CREATE INDEX 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0210-sequence.sql 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0220-sequence.sql 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0100-jpatoscapolicy_targets.sql 14:59:57 policy-db-migrator | ALTER TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0110-jpatoscapolicytype_targets.sql 14:59:57 policy-db-migrator | ALTER TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0120-toscatrigger.sql 14:59:57 policy-db-migrator | DROP TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0130-jpatoscapolicytype_triggers.sql 14:59:57 policy-db-migrator | ALTER TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0140-toscaparameter.sql 14:59:57 policy-db-migrator | DROP TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0150-toscaproperty.sql 14:59:57 policy-db-migrator | DROP TABLE 14:59:57 policy-db-migrator | DROP TABLE 14:59:57 policy-db-migrator | DROP TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0160-jpapolicyaudit_pk.sql 14:59:57 policy-db-migrator | ALTER TABLE 14:59:57 policy-db-migrator | ALTER TABLE 14:59:57 policy-db-migrator | INSERT 0 
1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0170-pdpstatistics_pk.sql 14:59:57 policy-db-migrator | ALTER TABLE 14:59:57 policy-db-migrator | ALTER TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0180-jpatoscanodetemplate_metadata.sql 14:59:57 policy-db-migrator | ALTER TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0100-upgrade.sql 14:59:57 policy-db-migrator | msg 14:59:57 policy-db-migrator | --------------------------- 14:59:57 policy-db-migrator | upgrade to 1100 completed 14:59:57 policy-db-migrator | (1 row) 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0100-jpapolicyaudit_renameuser.sql 14:59:57 policy-db-migrator | ALTER TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql 14:59:57 policy-db-migrator | DROP INDEX 14:59:57 policy-db-migrator | CREATE INDEX 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0120-audit_sequence.sql 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0130-statistics_sequence.sql 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0100-pdpstatistics.sql 14:59:57 policy-db-migrator | DROP TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql 14:59:57 policy-db-migrator | DROP TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0120-statistics_sequence.sql 14:59:57 policy-db-migrator | DROP TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | policyadmin: OK: upgrade (1300) 14:59:57 policy-db-migrator | List of databases 14:59:57 policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges 14:59:57 policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- 14:59:57 policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 14:59:57 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 14:59:57 policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 14:59:57 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 14:59:57 policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 14:59:57 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 14:59:57 
policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 14:59:57 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 14:59:57 policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 14:59:57 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 14:59:57 policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 14:59:57 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 14:59:57 policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | 14:59:57 policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 14:59:57 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 14:59:57 policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 14:59:57 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 14:59:57 policy-db-migrator | (9 rows) 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | NOTICE: relation "policyadmin_schema_changelog" already exists, skipping 14:59:57 policy-db-migrator | name | version 14:59:57 policy-db-migrator | -------------+--------- 14:59:57 policy-db-migrator | policyadmin | 1300 14:59:57 policy-db-migrator | (1 row) 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime 14:59:57 policy-db-migrator | -----+---------------------------------------------------------------+-----------+--------------+------------+-------------------+---------+---------------------------- 14:59:57 policy-db-migrator | 1 | 0100-jpapdpgroup_properties.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:39.18976 14:59:57 policy-db-migrator | 2 | 0110-jpapdpstatistics_enginestats.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:39.235723 14:59:57 policy-db-migrator | 3 | 0120-jpapdpsubgroup_policies.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:39.288446 14:59:57 policy-db-migrator | 4 | 0130-jpapdpsubgroup_properties.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:39.346232 14:59:57 policy-db-migrator | 5 | 0140-jpapdpsubgroup_supportedpolicytypes.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:39.402747 14:59:57 policy-db-migrator | 6 | 0150-jpatoscacapabilityassignment_attributes.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:39.453333 14:59:57 policy-db-migrator | 7 | 0160-jpatoscacapabilityassignment_metadata.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:39.505238 14:59:57 policy-db-migrator | 8 | 0170-jpatoscacapabilityassignment_occurrences.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:39.552547 14:59:57 policy-db-migrator | 9 | 0180-jpatoscacapabilityassignment_properties.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:39.609198 14:59:57 policy-db-migrator | 10 | 0190-jpatoscacapabilitytype_metadata.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:39.6593 14:59:57 policy-db-migrator | 11 | 0200-jpatoscacapabilitytype_properties.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:39.710297 14:59:57 
policy-db-migrator | 12 | 0210-jpatoscadatatype_constraints.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:39.757171 14:59:57 policy-db-migrator | 13 | 0220-jpatoscadatatype_metadata.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:39.807078 14:59:57 policy-db-migrator | 14 | 0230-jpatoscadatatype_properties.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:39.859371 14:59:57 policy-db-migrator | 15 | 0240-jpatoscanodetemplate_metadata.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:39.910159 14:59:57 policy-db-migrator | 16 | 0250-jpatoscanodetemplate_properties.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:39.964163 14:59:57 policy-db-migrator | 17 | 0260-jpatoscanodetype_metadata.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:40.013874 14:59:57 policy-db-migrator | 18 | 0270-jpatoscanodetype_properties.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:40.065374 14:59:57 policy-db-migrator | 19 | 0280-jpatoscapolicy_metadata.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:40.122799 14:59:57 policy-db-migrator | 20 | 0290-jpatoscapolicy_properties.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:40.178803 14:59:57 policy-db-migrator | 21 | 0300-jpatoscapolicy_targets.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:40.226544 14:59:57 policy-db-migrator | 22 | 0310-jpatoscapolicytype_metadata.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:40.277946 14:59:57 policy-db-migrator | 23 | 0320-jpatoscapolicytype_properties.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:40.32896 14:59:57 policy-db-migrator | 24 | 0330-jpatoscapolicytype_targets.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:40.379188 14:59:57 policy-db-migrator | 25 | 0340-jpatoscapolicytype_triggers.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:40.428567 14:59:57 policy-db-migrator | 26 | 0350-jpatoscaproperty_constraints.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:40.486557 14:59:57 policy-db-migrator | 27 | 0360-jpatoscaproperty_metadata.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:40.539177 14:59:57 policy-db-migrator | 28 | 0370-jpatoscarelationshiptype_metadata.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:40.592826 14:59:57 policy-db-migrator | 29 | 0380-jpatoscarelationshiptype_properties.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:40.645962 14:59:57 policy-db-migrator | 30 | 0390-jpatoscarequirement_metadata.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:40.698371 14:59:57 policy-db-migrator | 31 | 0400-jpatoscarequirement_occurrences.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:40.757659 14:59:57 policy-db-migrator | 32 | 0410-jpatoscarequirement_properties.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:40.806122 14:59:57 policy-db-migrator | 33 | 0420-jpatoscaservicetemplate_metadata.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:40.859425 14:59:57 policy-db-migrator | 34 | 0430-jpatoscatopologytemplate_inputs.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:40.920719 14:59:57 policy-db-migrator | 35 | 0440-pdpgroup_pdpsubgroup.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:40.982926 14:59:57 policy-db-migrator | 36 | 
0450-pdpgroup.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:41.037888 14:59:57 policy-db-migrator | 37 | 0460-pdppolicystatus.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:41.105255 14:59:57 policy-db-migrator | 38 | 0470-pdp.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:41.165672 14:59:57 policy-db-migrator | 39 | 0480-pdpstatistics.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:41.231306 14:59:57 policy-db-migrator | 40 | 0490-pdpsubgroup_pdp.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:41.295163 14:59:57 policy-db-migrator | 41 | 0500-pdpsubgroup.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:41.367422 14:59:57 policy-db-migrator | 42 | 0510-toscacapabilityassignment.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:41.424178 14:59:57 policy-db-migrator | 43 | 0520-toscacapabilityassignments.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:41.509181 14:59:57 policy-db-migrator | 44 | 0530-toscacapabilityassignments_toscacapabilityassignment.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:41.566089 14:59:57 policy-db-migrator | 45 | 0540-toscacapabilitytype.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:41.634884 14:59:57 policy-db-migrator | 46 | 0550-toscacapabilitytypes.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:41.688548 14:59:57 policy-db-migrator | 47 | 0560-toscacapabilitytypes_toscacapabilitytype.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:41.743224 14:59:57 policy-db-migrator | 48 | 0570-toscadatatype.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:42.072799 14:59:57 policy-db-migrator | 49 | 0580-toscadatatypes.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:42.465725 14:59:57 policy-db-migrator | 50 | 0590-toscadatatypes_toscadatatype.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:42.713722 14:59:57 policy-db-migrator | 51 | 0600-toscanodetemplate.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:42.989643 14:59:57 policy-db-migrator | 52 | 0610-toscanodetemplates.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:43.04045 14:59:57 policy-db-migrator | 53 | 0620-toscanodetemplates_toscanodetemplate.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:43.089425 14:59:57 policy-db-migrator | 54 | 0630-toscanodetype.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:43.142743 14:59:57 policy-db-migrator | 55 | 0640-toscanodetypes.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:43.203577 14:59:57 policy-db-migrator | 56 | 0650-toscanodetypes_toscanodetype.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:43.251598 14:59:57 policy-db-migrator | 57 | 0660-toscaparameter.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:43.306785 14:59:57 policy-db-migrator | 58 | 0670-toscapolicies.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:43.357922 14:59:57 policy-db-migrator | 59 | 0680-toscapolicies_toscapolicy.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:43.406305 14:59:57 policy-db-migrator | 60 | 0690-toscapolicy.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:43.465583 14:59:57 policy-db-migrator | 61 | 0700-toscapolicytype.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 
14:56:43.521667 14:59:57 policy-db-migrator | 62 | 0710-toscapolicytypes.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:43.57892 14:59:57 policy-db-migrator | 63 | 0720-toscapolicytypes_toscapolicytype.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:43.637727 14:59:57 policy-db-migrator | 64 | 0730-toscaproperty.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:43.700012 14:59:57 policy-db-migrator | 65 | 0740-toscarelationshiptype.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:43.759518 14:59:57 policy-db-migrator | 66 | 0750-toscarelationshiptypes.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:43.823759 14:59:57 policy-db-migrator | 67 | 0760-toscarelationshiptypes_toscarelationshiptype.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:43.877346 14:59:57 policy-db-migrator | 68 | 0770-toscarequirement.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:43.932977 14:59:57 policy-db-migrator | 69 | 0780-toscarequirements.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:43.979422 14:59:57 policy-db-migrator | 70 | 0790-toscarequirements_toscarequirement.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:44.03203 14:59:57 policy-db-migrator | 71 | 0800-toscaservicetemplate.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:44.093639 14:59:57 policy-db-migrator | 72 | 0810-toscatopologytemplate.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:44.154862 14:59:57 policy-db-migrator | 73 | 0820-toscatrigger.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:44.217875 14:59:57 policy-db-migrator | 74 | 0830-FK_ToscaNodeTemplate_capabilitiesName.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:44.26724 14:59:57 policy-db-migrator | 75 | 0840-FK_ToscaNodeTemplate_requirementsName.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:44.328594 14:59:57 policy-db-migrator | 76 | 0850-FK_ToscaNodeType_requirementsName.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:44.385515 14:59:57 policy-db-migrator | 77 | 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:44.438286 14:59:57 policy-db-migrator | 78 | 0870-FK_ToscaServiceTemplate_dataTypesName.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:44.484409 14:59:57 policy-db-migrator | 79 | 0880-FK_ToscaServiceTemplate_nodeTypesName.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:44.53775 14:59:57 policy-db-migrator | 80 | 0890-FK_ToscaServiceTemplate_policyTypesName.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:44.589197 14:59:57 policy-db-migrator | 81 | 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:44.637113 14:59:57 policy-db-migrator | 82 | 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:44.684419 14:59:57 policy-db-migrator | 83 | 0920-FK_ToscaTopologyTemplate_policyName.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:44.743862 14:59:57 policy-db-migrator | 84 | 0940-PdpPolicyStatus_PdpGroup.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:44.793522 14:59:57 policy-db-migrator | 85 | 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql | upgrade | 0 | 0800 | 
1306251456390800u | 1 | 2025-06-13 14:56:44.842049 14:59:57 policy-db-migrator | 86 | 0960-FK_ToscaNodeTemplate_capabilitiesName.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:44.898488 14:59:57 policy-db-migrator | 87 | 0970-FK_ToscaNodeTemplate_requirementsName.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:44.952905 14:59:57 policy-db-migrator | 88 | 0980-FK_ToscaNodeType_requirementsName.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:44.999682 14:59:57 policy-db-migrator | 89 | 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:45.050318 14:59:57 policy-db-migrator | 90 | 1000-FK_ToscaServiceTemplate_dataTypesName.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:45.108885 14:59:57 policy-db-migrator | 91 | 1010-FK_ToscaServiceTemplate_nodeTypesName.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:45.155557 14:59:57 policy-db-migrator | 92 | 1020-FK_ToscaServiceTemplate_policyTypesName.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:45.206137 14:59:57 policy-db-migrator | 93 | 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:45.251001 14:59:57 policy-db-migrator | 94 | 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:45.301566 14:59:57 policy-db-migrator | 95 | 1050-FK_ToscaTopologyTemplate_policyName.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:45.383107 14:59:57 policy-db-migrator | 96 | 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql | upgrade | 0 | 0800 | 1306251456390800u | 1 | 2025-06-13 14:56:45.443344 14:59:57 policy-db-migrator | 97 | 0100-pdp.sql | upgrade | 0800 | 0900 | 1306251456390900u | 1 | 2025-06-13 14:56:45.508293 14:59:57 policy-db-migrator | 98 | 0110-idx_tsidx1.sql | upgrade | 0800 | 0900 | 1306251456390900u | 1 | 2025-06-13 14:56:45.565212 14:59:57 policy-db-migrator | 99 | 0120-pk_pdpstatistics.sql | upgrade | 0800 | 0900 | 1306251456390900u | 1 | 2025-06-13 14:56:45.61694 14:59:57 policy-db-migrator | 100 | 0130-pdpstatistics.sql | upgrade | 0800 | 0900 | 1306251456390900u | 1 | 2025-06-13 14:56:45.67316 14:59:57 policy-db-migrator | 101 | 0140-pk_pdpstatistics.sql | upgrade | 0800 | 0900 | 1306251456390900u | 1 | 2025-06-13 14:56:45.744069 14:59:57 policy-db-migrator | 102 | 0150-pdpstatistics.sql | upgrade | 0800 | 0900 | 1306251456390900u | 1 | 2025-06-13 14:56:45.798014 14:59:57 policy-db-migrator | 103 | 0160-jpapdpstatistics_enginestats.sql | upgrade | 0800 | 0900 | 1306251456390900u | 1 | 2025-06-13 14:56:45.854932 14:59:57 policy-db-migrator | 104 | 0170-jpapdpstatistics_enginestats.sql | upgrade | 0800 | 0900 | 1306251456390900u | 1 | 2025-06-13 14:56:45.936889 14:59:57 policy-db-migrator | 105 | 0180-jpapdpstatistics_enginestats.sql | upgrade | 0800 | 0900 | 1306251456390900u | 1 | 2025-06-13 14:56:45.990733 14:59:57 policy-db-migrator | 106 | 0190-jpapolicyaudit.sql | upgrade | 0800 | 0900 | 1306251456390900u | 1 | 2025-06-13 14:56:46.059254 14:59:57 policy-db-migrator | 107 | 0200-JpaPolicyAuditIndex_timestamp.sql | upgrade | 0800 | 0900 | 1306251456390900u | 1 | 2025-06-13 14:56:46.131309 14:59:57 policy-db-migrator | 108 | 0210-sequence.sql | upgrade | 0800 | 0900 | 1306251456390900u | 1 | 2025-06-13 14:56:46.190798 14:59:57 policy-db-migrator | 109 | 0220-sequence.sql | upgrade | 0800 | 0900 | 
1306251456390900u | 1 | 2025-06-13 14:56:46.248394 14:59:57 policy-db-migrator | 110 | 0100-jpatoscapolicy_targets.sql | upgrade | 0900 | 1000 | 1306251456391000u | 1 | 2025-06-13 14:56:46.30331 14:59:57 policy-db-migrator | 111 | 0110-jpatoscapolicytype_targets.sql | upgrade | 0900 | 1000 | 1306251456391000u | 1 | 2025-06-13 14:56:46.357894 14:59:57 policy-db-migrator | 112 | 0120-toscatrigger.sql | upgrade | 0900 | 1000 | 1306251456391000u | 1 | 2025-06-13 14:56:46.422851 14:59:57 policy-db-migrator | 113 | 0130-jpatoscapolicytype_triggers.sql | upgrade | 0900 | 1000 | 1306251456391000u | 1 | 2025-06-13 14:56:46.476869 14:59:57 policy-db-migrator | 114 | 0140-toscaparameter.sql | upgrade | 0900 | 1000 | 1306251456391000u | 1 | 2025-06-13 14:56:46.526239 14:59:57 policy-db-migrator | 115 | 0150-toscaproperty.sql | upgrade | 0900 | 1000 | 1306251456391000u | 1 | 2025-06-13 14:56:46.581657 14:59:57 policy-db-migrator | 116 | 0160-jpapolicyaudit_pk.sql | upgrade | 0900 | 1000 | 1306251456391000u | 1 | 2025-06-13 14:56:46.64603 14:59:57 policy-db-migrator | 117 | 0170-pdpstatistics_pk.sql | upgrade | 0900 | 1000 | 1306251456391000u | 1 | 2025-06-13 14:56:46.698133 14:59:57 policy-db-migrator | 118 | 0180-jpatoscanodetemplate_metadata.sql | upgrade | 0900 | 1000 | 1306251456391000u | 1 | 2025-06-13 14:56:46.745466 14:59:57 policy-db-migrator | 119 | 0100-upgrade.sql | upgrade | 1000 | 1100 | 1306251456391100u | 1 | 2025-06-13 14:56:46.789182 14:59:57 policy-db-migrator | 120 | 0100-jpapolicyaudit_renameuser.sql | upgrade | 1100 | 1200 | 1306251456391200u | 1 | 2025-06-13 14:56:46.836778 14:59:57 policy-db-migrator | 121 | 0110-idx_tsidx1.sql | upgrade | 1100 | 1200 | 1306251456391200u | 1 | 2025-06-13 14:56:46.897075 14:59:57 policy-db-migrator | 122 | 0120-audit_sequence.sql | upgrade | 1100 | 1200 | 1306251456391200u | 1 | 2025-06-13 14:56:46.955134 14:59:57 policy-db-migrator | 123 | 0130-statistics_sequence.sql | upgrade | 1100 | 1200 | 1306251456391200u | 1 | 2025-06-13 14:56:47.022035 14:59:57 policy-db-migrator | 124 | 0100-pdpstatistics.sql | upgrade | 1200 | 1300 | 1306251456391300u | 1 | 2025-06-13 14:56:47.065072 14:59:57 policy-db-migrator | 125 | 0110-jpapdpstatistics_enginestats.sql | upgrade | 1200 | 1300 | 1306251456391300u | 1 | 2025-06-13 14:56:47.110425 14:59:57 policy-db-migrator | 126 | 0120-statistics_sequence.sql | upgrade | 1200 | 1300 | 1306251456391300u | 1 | 2025-06-13 14:56:47.157675 14:59:57 policy-db-migrator | (126 rows) 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | policyadmin: OK @ 1300 14:59:57 policy-db-migrator | Initializing clampacm... 
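
The 126-row policyadmin table above is the migrator's complete audit trail: one row per applied script, with the version window and a success flag. As a minimal sketch of reading that history back over JDBC (column names match the dump above; the JDBC URL and password are placeholders, not values taken from this job, and the PostgreSQL driver is assumed to be on the classpath):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Minimal sketch: list the rows db-migrator wrote to policyadmin_schema_changelog.
// The JDBC URL and credentials below are placeholders for illustration only.
public class ChangelogReader {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:postgresql://localhost:5432/policyadmin"; // placeholder host/port
        try (Connection conn = DriverManager.getConnection(url, "policy_user", "CHANGEME");
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery(
                     "SELECT id, script, operation, from_version, to_version, tag, success, attime"
                   + " FROM policyadmin_schema_changelog ORDER BY id")) {
            while (rs.next()) {
                // success is stored as 1/0, exactly as printed in the log table
                System.out.printf("%3d  %-60s %s -> %s  tag=%s success=%d  %s%n",
                        rs.getLong("id"), rs.getString("script"),
                        rs.getString("from_version"), rs.getString("to_version"),
                        rs.getString("tag"), rs.getInt("success"),
                        rs.getTimestamp("attime"));
            }
        }
    }
}

Against the database dumped above, the last row printed would be entry 126, 0120-statistics_sequence.sql, closing the 1200 -> 1300 window.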
14:59:57 policy-db-migrator | 97 blocks 14:59:57 policy-db-migrator | Preparing upgrade release version: 1400 14:59:57 policy-db-migrator | Preparing upgrade release version: 1500 14:59:57 policy-db-migrator | Preparing upgrade release version: 1600 14:59:57 policy-db-migrator | Preparing upgrade release version: 1601 14:59:57 policy-db-migrator | Preparing upgrade release version: 1700 14:59:57 policy-db-migrator | Preparing upgrade release version: 1701 14:59:57 policy-db-migrator | Done 14:59:57 policy-db-migrator | List of databases 14:59:57 policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges 14:59:57 policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- 14:59:57 policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 14:59:57 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 14:59:57 policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 14:59:57 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 14:59:57 policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 14:59:57 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 14:59:57 policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 14:59:57 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 14:59:57 policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 14:59:57 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 14:59:57 policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 14:59:57 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 14:59:57 policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | 14:59:57 policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 14:59:57 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 14:59:57 policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 14:59:57 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 14:59:57 policy-db-migrator | (9 rows) 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | name | version 14:59:57 policy-db-migrator | ----------+--------- 14:59:57 policy-db-migrator | clampacm | 0 14:59:57 policy-db-migrator | (1 row) 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime 14:59:57 policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+-------- 14:59:57 policy-db-migrator | (0 rows) 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | clampacm: upgrade available: 0 -> 1701 14:59:57 policy-db-migrator | List of databases 14:59:57 policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges 14:59:57 policy-db-migrator | 
-------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- 14:59:57 policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 14:59:57 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 14:59:57 policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 14:59:57 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 14:59:57 policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 14:59:57 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 14:59:57 policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 14:59:57 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 14:59:57 policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 14:59:57 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 14:59:57 policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 14:59:57 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 14:59:57 policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | 14:59:57 policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 14:59:57 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 14:59:57 policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 14:59:57 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 14:59:57 policy-db-migrator | (9 rows) 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping 14:59:57 policy-db-migrator | NOTICE: relation "clampacm_schema_changelog" already exists, skipping 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | upgrade: 0 -> 1701 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0100-automationcomposition.sql 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0200-automationcompositiondefinition.sql 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0300-automationcompositionelement.sql 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0400-nodetemplatestate.sql 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0500-participant.sql 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0600-participantsupportedelements.sql 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 
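
Every "> upgrade <script>.sql" block above has the same shape: run one script, echo the psql command tags (CREATE TABLE, INSERT 0 1, ...), and end with rc=0 before the next block starts. A sketch of that per-script discipline, with script execution stubbed out; the real migrator is a shell wrapper around psql, and halting the batch on the first failure is an assumption about its behavior, not something shown in this log:

import java.util.List;
import java.util.function.ToIntFunction;

// Sketch of the per-script pattern visible above: execute one .sql file,
// check its return code ("rc=0" in the log), append a changelog row, and
// (assumed) stop the batch on the first failure. runScript stands in for psql -f.
public class BatchRunner {
    record Step(String script, String from, String to) {}

    static boolean upgrade(List<Step> batch, ToIntFunction<String> runScript) {
        for (Step s : batch) {
            System.out.printf("> upgrade %s%n", s.script());
            int rc = runScript.applyAsInt(s.script());
            System.out.printf("%s | upgrade | %s | %s | success=%d%n",
                    s.script(), s.from(), s.to(), rc == 0 ? 1 : 0);
            if (rc != 0) return false; // assumed failure policy
        }
        return true;
    }

    public static void main(String[] args) {
        // Script names and version window taken from the clampacm changelog below.
        List<Step> batch = List.of(
                new Step("0700-ac_compositionId_index.sql", "1300", "1400"),
                new Step("0800-ac_element_fk_index.sql", "1300", "1400"));
        System.out.println(upgrade(batch, script -> 0) ? "clampacm: OK" : "upgrade failed");
    }
}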
14:59:57 policy-db-migrator | > upgrade 0700-ac_compositionId_index.sql 14:59:57 policy-db-migrator | CREATE INDEX 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0800-ac_element_fk_index.sql 14:59:57 policy-db-migrator | CREATE INDEX 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0900-dt_element_fk_index.sql 14:59:57 policy-db-migrator | CREATE INDEX 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 1000-supported_element_fk_index.sql 14:59:57 policy-db-migrator | CREATE INDEX 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 1100-automationcompositionelement_fk.sql 14:59:57 policy-db-migrator | ALTER TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 1200-nodetemplate_fk.sql 14:59:57 policy-db-migrator | ALTER TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 1300-participantsupportedelements_fk.sql 14:59:57 policy-db-migrator | ALTER TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0100-automationcomposition.sql 14:59:57 policy-db-migrator | ALTER TABLE 14:59:57 policy-db-migrator | UPDATE 0 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0200-automationcompositiondefinition.sql 14:59:57 policy-db-migrator | ALTER TABLE 14:59:57 policy-db-migrator | UPDATE 0 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0300-participantreplica.sql 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0400-participant.sql 14:59:57 policy-db-migrator | ALTER TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0500-participant_replica_fk_index.sql 14:59:57 policy-db-migrator | CREATE INDEX 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0600-participant_replica_fk.sql 14:59:57 policy-db-migrator | ALTER TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0700-automationcompositionelement.sql 14:59:57 policy-db-migrator | UPDATE 0 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0800-nodetemplatestate.sql 14:59:57 policy-db-migrator | UPDATE 0 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0100-automationcomposition.sql 14:59:57 policy-db-migrator | ALTER TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 
policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0200-automationcompositionelement.sql 14:59:57 policy-db-migrator | ALTER TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0100-automationcomposition.sql 14:59:57 policy-db-migrator | UPDATE 0 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0200-automationcompositionelement.sql 14:59:57 policy-db-migrator | UPDATE 0 14:59:57 policy-db-migrator | UPDATE 0 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0100-message.sql 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0200-messagejob.sql 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0300-messagejob_identificationId_index.sql 14:59:57 policy-db-migrator | CREATE INDEX 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0100-automationcompositionrollback.sql 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0200-automationcomposition.sql 14:59:57 policy-db-migrator | UPDATE 0 14:59:57 policy-db-migrator | UPDATE 0 14:59:57 policy-db-migrator | UPDATE 0 14:59:57 policy-db-migrator | UPDATE 0 14:59:57 policy-db-migrator | UPDATE 0 14:59:57 policy-db-migrator | UPDATE 0 14:59:57 policy-db-migrator | ALTER TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0300-automationcompositionelement.sql 14:59:57 policy-db-migrator | UPDATE 0 14:59:57 policy-db-migrator | UPDATE 0 14:59:57 policy-db-migrator | UPDATE 0 14:59:57 policy-db-migrator | UPDATE 0 14:59:57 policy-db-migrator | UPDATE 0 14:59:57 policy-db-migrator | ALTER TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0400-automationcomposition_fk.sql 14:59:57 policy-db-migrator | ALTER TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0500-automationcompositiondefinition.sql 14:59:57 policy-db-migrator | UPDATE 0 14:59:57 policy-db-migrator | UPDATE 0 14:59:57 policy-db-migrator | UPDATE 0 14:59:57 policy-db-migrator | UPDATE 0 14:59:57 policy-db-migrator | UPDATE 0 14:59:57 policy-db-migrator | ALTER TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0600-nodetemplatestate.sql 14:59:57 policy-db-migrator | UPDATE 0 14:59:57 policy-db-migrator | UPDATE 0 14:59:57 policy-db-migrator | UPDATE 0 14:59:57 policy-db-migrator | UPDATE 0 14:59:57 policy-db-migrator | ALTER TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator 
| > upgrade 0700-mb_identificationId_index.sql 14:59:57 policy-db-migrator | CREATE INDEX 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0800-participantreplica.sql 14:59:57 policy-db-migrator | UPDATE 0 14:59:57 policy-db-migrator | UPDATE 0 14:59:57 policy-db-migrator | ALTER TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0900-participantsupportedacelements.sql 14:59:57 policy-db-migrator | UPDATE 0 14:59:57 policy-db-migrator | UPDATE 0 14:59:57 policy-db-migrator | ALTER TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | clampacm: OK: upgrade (1701) 14:59:57 policy-db-migrator | List of databases 14:59:57 policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges 14:59:57 policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- 14:59:57 policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 14:59:57 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 14:59:57 policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 14:59:57 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 14:59:57 policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 14:59:57 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 14:59:57 policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 14:59:57 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 14:59:57 policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 14:59:57 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 14:59:57 policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 14:59:57 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 14:59:57 policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | 14:59:57 policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 14:59:57 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 14:59:57 policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 14:59:57 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 14:59:57 policy-db-migrator | (9 rows) 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | NOTICE: relation "clampacm_schema_changelog" already exists, skipping 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | name | version 14:59:57 policy-db-migrator | ----------+--------- 14:59:57 policy-db-migrator | clampacm | 1701 14:59:57 policy-db-migrator | (1 row) 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime 14:59:57 policy-db-migrator | 
----+--------------------------------------------+-----------+--------------+------------+-------------------+---------+---------------------------- 14:59:57 policy-db-migrator | 1 | 0100-automationcomposition.sql | upgrade | 1300 | 1400 | 1306251456481400u | 1 | 2025-06-13 14:56:48.420588 14:59:57 policy-db-migrator | 2 | 0200-automationcompositiondefinition.sql | upgrade | 1300 | 1400 | 1306251456481400u | 1 | 2025-06-13 14:56:48.502112 14:59:57 policy-db-migrator | 3 | 0300-automationcompositionelement.sql | upgrade | 1300 | 1400 | 1306251456481400u | 1 | 2025-06-13 14:56:48.55626 14:59:57 policy-db-migrator | 4 | 0400-nodetemplatestate.sql | upgrade | 1300 | 1400 | 1306251456481400u | 1 | 2025-06-13 14:56:48.608803 14:59:57 policy-db-migrator | 5 | 0500-participant.sql | upgrade | 1300 | 1400 | 1306251456481400u | 1 | 2025-06-13 14:56:48.666759 14:59:57 policy-db-migrator | 6 | 0600-participantsupportedelements.sql | upgrade | 1300 | 1400 | 1306251456481400u | 1 | 2025-06-13 14:56:48.720684 14:59:57 policy-db-migrator | 7 | 0700-ac_compositionId_index.sql | upgrade | 1300 | 1400 | 1306251456481400u | 1 | 2025-06-13 14:56:48.772399 14:59:57 policy-db-migrator | 8 | 0800-ac_element_fk_index.sql | upgrade | 1300 | 1400 | 1306251456481400u | 1 | 2025-06-13 14:56:48.820544 14:59:57 policy-db-migrator | 9 | 0900-dt_element_fk_index.sql | upgrade | 1300 | 1400 | 1306251456481400u | 1 | 2025-06-13 14:56:48.868675 14:59:57 policy-db-migrator | 10 | 1000-supported_element_fk_index.sql | upgrade | 1300 | 1400 | 1306251456481400u | 1 | 2025-06-13 14:56:48.931443 14:59:57 policy-db-migrator | 11 | 1100-automationcompositionelement_fk.sql | upgrade | 1300 | 1400 | 1306251456481400u | 1 | 2025-06-13 14:56:48.986197 14:59:57 policy-db-migrator | 12 | 1200-nodetemplate_fk.sql | upgrade | 1300 | 1400 | 1306251456481400u | 1 | 2025-06-13 14:56:49.038842 14:59:57 policy-db-migrator | 13 | 1300-participantsupportedelements_fk.sql | upgrade | 1300 | 1400 | 1306251456481400u | 1 | 2025-06-13 14:56:49.09263 14:59:57 policy-db-migrator | 14 | 0100-automationcomposition.sql | upgrade | 1400 | 1500 | 1306251456481500u | 1 | 2025-06-13 14:56:49.14466 14:59:57 policy-db-migrator | 15 | 0200-automationcompositiondefinition.sql | upgrade | 1400 | 1500 | 1306251456481500u | 1 | 2025-06-13 14:56:49.196379 14:59:57 policy-db-migrator | 16 | 0300-participantreplica.sql | upgrade | 1400 | 1500 | 1306251456481500u | 1 | 2025-06-13 14:56:49.25297 14:59:57 policy-db-migrator | 17 | 0400-participant.sql | upgrade | 1400 | 1500 | 1306251456481500u | 1 | 2025-06-13 14:56:49.302315 14:59:57 policy-db-migrator | 18 | 0500-participant_replica_fk_index.sql | upgrade | 1400 | 1500 | 1306251456481500u | 1 | 2025-06-13 14:56:49.352949 14:59:57 policy-db-migrator | 19 | 0600-participant_replica_fk.sql | upgrade | 1400 | 1500 | 1306251456481500u | 1 | 2025-06-13 14:56:49.409067 14:59:57 policy-db-migrator | 20 | 0700-automationcompositionelement.sql | upgrade | 1400 | 1500 | 1306251456481500u | 1 | 2025-06-13 14:56:49.457724 14:59:57 policy-db-migrator | 21 | 0800-nodetemplatestate.sql | upgrade | 1400 | 1500 | 1306251456481500u | 1 | 2025-06-13 14:56:49.501656 14:59:57 policy-db-migrator | 22 | 0100-automationcomposition.sql | upgrade | 1500 | 1600 | 1306251456481600u | 1 | 2025-06-13 14:56:49.549514 14:59:57 policy-db-migrator | 23 | 0200-automationcompositionelement.sql | upgrade | 1500 | 1600 | 1306251456481600u | 1 | 2025-06-13 14:56:49.596024 14:59:57 policy-db-migrator | 24 | 0100-automationcomposition.sql | upgrade | 1501 | 
1601 | 1306251456481601u | 1 | 2025-06-13 14:56:49.640674 14:59:57 policy-db-migrator | 25 | 0200-automationcompositionelement.sql | upgrade | 1501 | 1601 | 1306251456481601u | 1 | 2025-06-13 14:56:49.695805 14:59:57 policy-db-migrator | 26 | 0100-message.sql | upgrade | 1600 | 1700 | 1306251456481700u | 1 | 2025-06-13 14:56:49.741225 14:59:57 policy-db-migrator | 27 | 0200-messagejob.sql | upgrade | 1600 | 1700 | 1306251456481700u | 1 | 2025-06-13 14:56:49.807165 14:59:57 policy-db-migrator | 28 | 0300-messagejob_identificationId_index.sql | upgrade | 1600 | 1700 | 1306251456481700u | 1 | 2025-06-13 14:56:49.861697 14:59:57 policy-db-migrator | 29 | 0100-automationcompositionrollback.sql | upgrade | 1601 | 1701 | 1306251456481701u | 1 | 2025-06-13 14:56:49.914433 14:59:57 policy-db-migrator | 30 | 0200-automationcomposition.sql | upgrade | 1601 | 1701 | 1306251456481701u | 1 | 2025-06-13 14:56:49.973585 14:59:57 policy-db-migrator | 31 | 0300-automationcompositionelement.sql | upgrade | 1601 | 1701 | 1306251456481701u | 1 | 2025-06-13 14:56:50.023946 14:59:57 policy-db-migrator | 32 | 0400-automationcomposition_fk.sql | upgrade | 1601 | 1701 | 1306251456481701u | 1 | 2025-06-13 14:56:50.076834 14:59:57 policy-db-migrator | 33 | 0500-automationcompositiondefinition.sql | upgrade | 1601 | 1701 | 1306251456481701u | 1 | 2025-06-13 14:56:50.126906 14:59:57 policy-db-migrator | 34 | 0600-nodetemplatestate.sql | upgrade | 1601 | 1701 | 1306251456481701u | 1 | 2025-06-13 14:56:50.180079 14:59:57 policy-db-migrator | 35 | 0700-mb_identificationId_index.sql | upgrade | 1601 | 1701 | 1306251456481701u | 1 | 2025-06-13 14:56:50.241525 14:59:57 policy-db-migrator | 36 | 0800-participantreplica.sql | upgrade | 1601 | 1701 | 1306251456481701u | 1 | 2025-06-13 14:56:50.290505 14:59:57 policy-db-migrator | 37 | 0900-participantsupportedacelements.sql | upgrade | 1601 | 1701 | 1306251456481701u | 1 | 2025-06-13 14:56:50.332297 14:59:57 policy-db-migrator | (37 rows) 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | clampacm: OK @ 1701 14:59:57 policy-db-migrator | Initializing pooling... 
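
"clampacm: OK @ 1701" closes the loop opened earlier by "clampacm: upgrade available: 0 -> 1701": the migrator compares the version recorded in schema_versions against the newest release it prepared (1400, 1500, 1600, 1601, 1700, 1701) and runs only the batches in between. A compact sketch of that comparison, with the recorded version hard-coded to the value the log showed before the upgrade:

import java.util.List;

// Sketch of the check behind "clampacm: upgrade available: 0 -> 1701".
public class VersionCheck {
    public static void main(String[] args) {
        int recorded = 0; // from: SELECT version FROM schema_versions WHERE name = 'clampacm'
        List<Integer> prepared = List.of(1400, 1500, 1600, 1601, 1700, 1701);
        int latest = prepared.get(prepared.size() - 1);
        if (recorded < latest) {
            System.out.printf("clampacm: upgrade available: %d -> %d%n", recorded, latest);
        } else {
            System.out.printf("clampacm: OK @ %d%n", recorded);
        }
    }
}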
14:59:57 policy-db-migrator | 4 blocks 14:59:57 policy-db-migrator | Preparing upgrade release version: 1600 14:59:57 policy-db-migrator | Done 14:59:57 policy-db-migrator | List of databases 14:59:57 policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges 14:59:57 policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- 14:59:57 policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 14:59:57 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 14:59:57 policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 14:59:57 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 14:59:57 policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 14:59:57 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 14:59:57 policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 14:59:57 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 14:59:57 policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 14:59:57 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 14:59:57 policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 14:59:57 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 14:59:57 policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | 14:59:57 policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 14:59:57 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 14:59:57 policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 14:59:57 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 14:59:57 policy-db-migrator | (9 rows) 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | name | version 14:59:57 policy-db-migrator | ---------+--------- 14:59:57 policy-db-migrator | pooling | 0 14:59:57 policy-db-migrator | (1 row) 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime 14:59:57 policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+-------- 14:59:57 policy-db-migrator | (0 rows) 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | pooling: upgrade available: 0 -> 1600 14:59:57 policy-db-migrator | List of databases 14:59:57 policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges 14:59:57 policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- 14:59:57 policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 14:59:57 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 14:59:57 policy-db-migrator | migration | 
policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 14:59:57 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 14:59:57 policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 14:59:57 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 14:59:57 policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 14:59:57 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 14:59:57 policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 14:59:57 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 14:59:57 policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 14:59:57 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 14:59:57 policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | 14:59:57 policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 14:59:57 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 14:59:57 policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 14:59:57 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 14:59:57 policy-db-migrator | (9 rows) 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping 14:59:57 policy-db-migrator | NOTICE: relation "pooling_schema_changelog" already exists, skipping 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | upgrade: 0 -> 1600 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0100-distributed.locking.sql 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | CREATE INDEX 14:59:57 policy-db-migrator | CREATE INDEX 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | pooling: OK: upgrade (1600) 14:59:57 policy-db-migrator | List of databases 14:59:57 policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges 14:59:57 policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- 14:59:57 policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 14:59:57 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 14:59:57 policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 14:59:57 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 14:59:57 policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 14:59:57 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 14:59:57 policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 14:59:57 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 14:59:57 policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 14:59:57 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 14:59:57 policy-db-migrator | pooling | 
policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 14:59:57 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 14:59:57 policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | 14:59:57 policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 14:59:57 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 14:59:57 policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 14:59:57 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 14:59:57 policy-db-migrator | (9 rows) 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | NOTICE: relation "pooling_schema_changelog" already exists, skipping 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | name | version 14:59:57 policy-db-migrator | ---------+--------- 14:59:57 policy-db-migrator | pooling | 1600 14:59:57 policy-db-migrator | (1 row) 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime 14:59:57 policy-db-migrator | ----+------------------------------+-----------+--------------+------------+-------------------+---------+---------------------------- 14:59:57 policy-db-migrator | 1 | 0100-distributed.locking.sql | upgrade | 1500 | 1600 | 1306251456501600u | 1 | 2025-06-13 14:56:50.977018 14:59:57 policy-db-migrator | (1 row) 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | pooling: OK @ 1600 14:59:57 policy-db-migrator | Initializing operationshistory... 14:59:57 policy-db-migrator | 6 blocks 14:59:57 policy-db-migrator | Preparing upgrade release version: 1600 14:59:57 policy-db-migrator | Done 14:59:57 policy-db-migrator | List of databases 14:59:57 policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges 14:59:57 policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- 14:59:57 policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 14:59:57 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 14:59:57 policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 14:59:57 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 14:59:57 policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 14:59:57 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 14:59:57 policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 14:59:57 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 14:59:57 policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 14:59:57 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 14:59:57 policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 14:59:57 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 14:59:57 policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | 14:59:57 policy-db-migrator | template0 | 
postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 14:59:57 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 14:59:57 policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 14:59:57 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 14:59:57 policy-db-migrator | (9 rows) 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | name | version 14:59:57 policy-db-migrator | -------------------+--------- 14:59:57 policy-db-migrator | operationshistory | 0 14:59:57 policy-db-migrator | (1 row) 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime 14:59:57 policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+-------- 14:59:57 policy-db-migrator | (0 rows) 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | operationshistory: upgrade available: 0 -> 1600 14:59:57 policy-db-migrator | List of databases 14:59:57 policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges 14:59:57 policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- 14:59:57 policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 14:59:57 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 14:59:57 policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 14:59:57 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 14:59:57 policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 14:59:57 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 14:59:57 policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 14:59:57 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 14:59:57 policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 14:59:57 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 14:59:57 policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 14:59:57 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 14:59:57 policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | 14:59:57 policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 14:59:57 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 14:59:57 policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 14:59:57 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 14:59:57 policy-db-migrator | (9 rows) 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | NOTICE: relation "operationshistory_schema_changelog" already exists, skipping 14:59:57 policy-db-migrator | upgrade: 
0 -> 1600 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0100-ophistory_id_sequence.sql 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | rc=0 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | > upgrade 0110-operationshistory.sql 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | CREATE INDEX 14:59:57 policy-db-migrator | CREATE INDEX 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | INSERT 0 1 14:59:57 policy-db-migrator | operationshistory: OK: upgrade (1600) 14:59:57 policy-db-migrator | List of databases 14:59:57 policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges 14:59:57 policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- 14:59:57 policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 14:59:57 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 14:59:57 policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 14:59:57 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 14:59:57 policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 14:59:57 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 14:59:57 policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 14:59:57 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 14:59:57 policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 14:59:57 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 14:59:57 policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 14:59:57 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 14:59:57 policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | 14:59:57 policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 14:59:57 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 14:59:57 policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 14:59:57 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 14:59:57 policy-db-migrator | (9 rows) 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | CREATE TABLE 14:59:57 policy-db-migrator | NOTICE: relation "operationshistory_schema_changelog" already exists, skipping 14:59:57 policy-db-migrator | name | version 14:59:57 policy-db-migrator | -------------------+--------- 14:59:57 policy-db-migrator | operationshistory | 1600 14:59:57 policy-db-migrator | (1 row) 14:59:57 policy-db-migrator | 14:59:57 policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime 14:59:57 policy-db-migrator | ----+--------------------------------+-----------+--------------+------------+-------------------+---------+---------------------------- 14:59:57 policy-db-migrator | 1 | 
0100-ophistory_id_sequence.sql | upgrade | 1500 | 1600 | 1306251456511600u | 1 | 2025-06-13 14:56:51.600731
14:59:57 policy-db-migrator | 2 | 0110-operationshistory.sql | upgrade | 1500 | 1600 | 1306251456511600u | 1 | 2025-06-13 14:56:51.662828
14:59:57 policy-db-migrator | (2 rows)
14:59:57 policy-db-migrator | 
14:59:57 policy-db-migrator | operationshistory: OK @ 1600
14:59:57 policy-pap | Waiting for api port 6969...
14:59:57 policy-pap | api (172.17.0.7:6969) open
14:59:57 policy-pap | Waiting for kafka port 9092...
14:59:57 policy-pap | kafka (172.17.0.5:9092) open
14:59:57 policy-pap | Policy pap config file: /opt/app/policy/pap/etc/papParameters.yaml
14:59:57 policy-pap | PDP group configuration file: /opt/app/policy/pap/etc/mounted/groups.json
14:59:57 policy-pap | 
14:59:57 policy-pap |   .   ____          _            __ _ _
14:59:57 policy-pap |  /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
14:59:57 policy-pap | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
14:59:57 policy-pap |  \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
14:59:57 policy-pap |   '  |____| .__|_| |_|_| |_\__, | / / / /
14:59:57 policy-pap |  =========|_|==============|___/=/_/_/_/
14:59:57 policy-pap | 
14:59:57 policy-pap |  :: Spring Boot ::                (v3.4.6)
14:59:57 policy-pap | 
14:59:57 policy-pap | [2025-06-13T14:57:04.008+00:00|INFO|PolicyPapApplication|main] Starting PolicyPapApplication using Java 17.0.15 with PID 54 (/app/pap.jar started by policy in /opt/app/policy/pap/bin)
14:59:57 policy-pap | [2025-06-13T14:57:04.009+00:00|INFO|PolicyPapApplication|main] The following 1 profile is active: "default"
14:59:57 policy-pap | [2025-06-13T14:57:05.377+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode.
14:59:57 policy-pap | [2025-06-13T14:57:05.468+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 78 ms. Found 7 JPA repository interfaces.
14:59:57 policy-pap | [2025-06-13T14:57:06.421+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port 6969 (http)
14:59:57 policy-pap | [2025-06-13T14:57:06.435+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"]
14:59:57 policy-pap | [2025-06-13T14:57:06.437+00:00|INFO|StandardService|main] Starting service [Tomcat]
14:59:57 policy-pap | [2025-06-13T14:57:06.437+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.41]
14:59:57 policy-pap | [2025-06-13T14:57:06.490+00:00|INFO|[/policy/pap/v1]|main] Initializing Spring embedded WebApplicationContext
14:59:57 policy-pap | [2025-06-13T14:57:06.490+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 2425 ms
14:59:57 policy-pap | [2025-06-13T14:57:06.929+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default]
14:59:57 policy-pap | [2025-06-13T14:57:07.010+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.6.16.Final
14:59:57 policy-pap | [2025-06-13T14:57:07.054+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled
14:59:57 policy-pap | [2025-06-13T14:57:07.481+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer
14:59:57 policy-pap | [2025-06-13T14:57:07.523+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting...
14:59:57 policy-pap | [2025-06-13T14:57:07.729+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.postgresql.jdbc.PgConnection@6e337ba1 14:59:57 policy-pap | [2025-06-13T14:57:07.731+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. 14:59:57 policy-pap | [2025-06-13T14:57:07.823+00:00|INFO|pooling|main] HHH10001005: Database info: 14:59:57 policy-pap | Database JDBC URL [Connecting through datasource 'HikariDataSource (HikariPool-1)'] 14:59:57 policy-pap | Database driver: undefined/unknown 14:59:57 policy-pap | Database version: 16.4 14:59:57 policy-pap | Autocommit mode: undefined/unknown 14:59:57 policy-pap | Isolation level: undefined/unknown 14:59:57 policy-pap | Minimum pool size: undefined/unknown 14:59:57 policy-pap | Maximum pool size: undefined/unknown 14:59:57 policy-pap | [2025-06-13T14:57:09.740+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) 14:59:57 policy-pap | [2025-06-13T14:57:09.744+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' 14:59:57 policy-pap | [2025-06-13T14:57:10.931+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 14:59:57 policy-pap | allow.auto.create.topics = true 14:59:57 policy-pap | auto.commit.interval.ms = 5000 14:59:57 policy-pap | auto.include.jmx.reporter = true 14:59:57 policy-pap | auto.offset.reset = latest 14:59:57 policy-pap | bootstrap.servers = [kafka:9092] 14:59:57 policy-pap | check.crcs = true 14:59:57 policy-pap | client.dns.lookup = use_all_dns_ips 14:59:57 policy-pap | client.id = consumer-044ad9e7-7f73-4e67-ada5-d3c6274784bc-1 14:59:57 policy-pap | client.rack = 14:59:57 policy-pap | connections.max.idle.ms = 540000 14:59:57 policy-pap | default.api.timeout.ms = 60000 14:59:57 policy-pap | enable.auto.commit = true 14:59:57 policy-pap | enable.metrics.push = true 14:59:57 policy-pap | exclude.internal.topics = true 14:59:57 policy-pap | fetch.max.bytes = 52428800 14:59:57 policy-pap | fetch.max.wait.ms = 500 14:59:57 policy-pap | fetch.min.bytes = 1 14:59:57 policy-pap | group.id = 044ad9e7-7f73-4e67-ada5-d3c6274784bc 14:59:57 policy-pap | group.instance.id = null 14:59:57 policy-pap | group.protocol = classic 14:59:57 policy-pap | group.remote.assignor = null 14:59:57 policy-pap | heartbeat.interval.ms = 3000 14:59:57 policy-pap | interceptor.classes = [] 14:59:57 policy-pap | internal.leave.group.on.close = true 14:59:57 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 14:59:57 policy-pap | isolation.level = read_uncommitted 14:59:57 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 14:59:57 policy-pap | max.partition.fetch.bytes = 1048576 14:59:57 policy-pap | max.poll.interval.ms = 300000 14:59:57 policy-pap | max.poll.records = 500 14:59:57 policy-pap | metadata.max.age.ms = 300000 14:59:57 policy-pap | metadata.recovery.strategy = none 14:59:57 policy-pap | metric.reporters = [] 14:59:57 policy-pap | metrics.num.samples = 2 14:59:57 policy-pap | metrics.recording.level = INFO 14:59:57 policy-pap | metrics.sample.window.ms = 30000 14:59:57 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 14:59:57 policy-pap | receive.buffer.bytes = 65536 14:59:57 policy-pap | reconnect.backoff.max.ms = 1000 14:59:57 policy-pap | reconnect.backoff.ms = 50 
14:59:57 policy-pap | request.timeout.ms = 30000 14:59:57 policy-pap | retry.backoff.max.ms = 1000 14:59:57 policy-pap | retry.backoff.ms = 100 14:59:57 policy-pap | sasl.client.callback.handler.class = null 14:59:57 policy-pap | sasl.jaas.config = null 14:59:57 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 14:59:57 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 14:59:57 policy-pap | sasl.kerberos.service.name = null 14:59:57 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 14:59:57 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 14:59:57 policy-pap | sasl.login.callback.handler.class = null 14:59:57 policy-pap | sasl.login.class = null 14:59:57 policy-pap | sasl.login.connect.timeout.ms = null 14:59:57 policy-pap | sasl.login.read.timeout.ms = null 14:59:57 policy-pap | sasl.login.refresh.buffer.seconds = 300 14:59:57 policy-pap | sasl.login.refresh.min.period.seconds = 60 14:59:57 policy-pap | sasl.login.refresh.window.factor = 0.8 14:59:57 policy-pap | sasl.login.refresh.window.jitter = 0.05 14:59:57 policy-pap | sasl.login.retry.backoff.max.ms = 10000 14:59:57 policy-pap | sasl.login.retry.backoff.ms = 100 14:59:57 policy-pap | sasl.mechanism = GSSAPI 14:59:57 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 14:59:57 policy-pap | sasl.oauthbearer.expected.audience = null 14:59:57 policy-pap | sasl.oauthbearer.expected.issuer = null 14:59:57 policy-pap | sasl.oauthbearer.header.urlencode = false 14:59:57 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 14:59:57 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 14:59:57 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 14:59:57 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 14:59:57 policy-pap | sasl.oauthbearer.scope.claim.name = scope 14:59:57 policy-pap | sasl.oauthbearer.sub.claim.name = sub 14:59:57 policy-pap | sasl.oauthbearer.token.endpoint.url = null 14:59:57 policy-pap | security.protocol = PLAINTEXT 14:59:57 policy-pap | security.providers = null 14:59:57 policy-pap | send.buffer.bytes = 131072 14:59:57 policy-pap | session.timeout.ms = 45000 14:59:57 policy-pap | socket.connection.setup.timeout.max.ms = 30000 14:59:57 policy-pap | socket.connection.setup.timeout.ms = 10000 14:59:57 policy-pap | ssl.cipher.suites = null 14:59:57 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 14:59:57 policy-pap | ssl.endpoint.identification.algorithm = https 14:59:57 policy-pap | ssl.engine.factory.class = null 14:59:57 policy-pap | ssl.key.password = null 14:59:57 policy-pap | ssl.keymanager.algorithm = SunX509 14:59:57 policy-pap | ssl.keystore.certificate.chain = null 14:59:57 policy-pap | ssl.keystore.key = null 14:59:57 policy-pap | ssl.keystore.location = null 14:59:57 policy-pap | ssl.keystore.password = null 14:59:57 policy-pap | ssl.keystore.type = JKS 14:59:57 policy-pap | ssl.protocol = TLSv1.3 14:59:57 policy-pap | ssl.provider = null 14:59:57 policy-pap | ssl.secure.random.implementation = null 14:59:57 policy-pap | ssl.trustmanager.algorithm = PKIX 14:59:57 policy-pap | ssl.truststore.certificates = null 14:59:57 policy-pap | ssl.truststore.location = null 14:59:57 policy-pap | ssl.truststore.password = null 14:59:57 policy-pap | ssl.truststore.type = JKS 14:59:57 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 14:59:57 policy-pap | 14:59:57 policy-pap | [2025-06-13T14:57:10.985+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector 14:59:57 
policy-pap | [2025-06-13T14:57:11.122+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 14:59:57 policy-pap | [2025-06-13T14:57:11.122+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 14:59:57 policy-pap | [2025-06-13T14:57:11.122+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749826631120 14:59:57 policy-pap | [2025-06-13T14:57:11.124+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-044ad9e7-7f73-4e67-ada5-d3c6274784bc-1, groupId=044ad9e7-7f73-4e67-ada5-d3c6274784bc] Subscribed to topic(s): policy-pdp-pap 14:59:57 policy-pap | [2025-06-13T14:57:11.125+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 14:59:57 policy-pap | allow.auto.create.topics = true 14:59:57 policy-pap | auto.commit.interval.ms = 5000 14:59:57 policy-pap | auto.include.jmx.reporter = true 14:59:57 policy-pap | auto.offset.reset = latest 14:59:57 policy-pap | bootstrap.servers = [kafka:9092] 14:59:57 policy-pap | check.crcs = true 14:59:57 policy-pap | client.dns.lookup = use_all_dns_ips 14:59:57 policy-pap | client.id = consumer-policy-pap-2 14:59:57 policy-pap | client.rack = 14:59:57 policy-pap | connections.max.idle.ms = 540000 14:59:57 policy-pap | default.api.timeout.ms = 60000 14:59:57 policy-pap | enable.auto.commit = true 14:59:57 policy-pap | enable.metrics.push = true 14:59:57 policy-pap | exclude.internal.topics = true 14:59:57 policy-pap | fetch.max.bytes = 52428800 14:59:57 policy-pap | fetch.max.wait.ms = 500 14:59:57 policy-pap | fetch.min.bytes = 1 14:59:57 policy-pap | group.id = policy-pap 14:59:57 policy-pap | group.instance.id = null 14:59:57 policy-pap | group.protocol = classic 14:59:57 policy-pap | group.remote.assignor = null 14:59:57 policy-pap | heartbeat.interval.ms = 3000 14:59:57 policy-pap | interceptor.classes = [] 14:59:57 policy-pap | internal.leave.group.on.close = true 14:59:57 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 14:59:57 policy-pap | isolation.level = read_uncommitted 14:59:57 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 14:59:57 policy-pap | max.partition.fetch.bytes = 1048576 14:59:57 policy-pap | max.poll.interval.ms = 300000 14:59:57 policy-pap | max.poll.records = 500 14:59:57 policy-pap | metadata.max.age.ms = 300000 14:59:57 policy-pap | metadata.recovery.strategy = none 14:59:57 policy-pap | metric.reporters = [] 14:59:57 policy-pap | metrics.num.samples = 2 14:59:57 policy-pap | metrics.recording.level = INFO 14:59:57 policy-pap | metrics.sample.window.ms = 30000 14:59:57 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 14:59:57 policy-pap | receive.buffer.bytes = 65536 14:59:57 policy-pap | reconnect.backoff.max.ms = 1000 14:59:57 policy-pap | reconnect.backoff.ms = 50 14:59:57 policy-pap | request.timeout.ms = 30000 14:59:57 policy-pap | retry.backoff.max.ms = 1000 14:59:57 policy-pap | retry.backoff.ms = 100 14:59:57 policy-pap | sasl.client.callback.handler.class = null 14:59:57 policy-pap | sasl.jaas.config = null 14:59:57 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 14:59:57 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 14:59:57 policy-pap | sasl.kerberos.service.name = null 14:59:57 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 14:59:57 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 14:59:57 policy-pap | sasl.login.callback.handler.class = null 14:59:57 policy-pap | sasl.login.class 
= null 14:59:57 policy-pap | sasl.login.connect.timeout.ms = null 14:59:57 policy-pap | sasl.login.read.timeout.ms = null 14:59:57 policy-pap | sasl.login.refresh.buffer.seconds = 300 14:59:57 policy-pap | sasl.login.refresh.min.period.seconds = 60 14:59:57 policy-pap | sasl.login.refresh.window.factor = 0.8 14:59:57 policy-pap | sasl.login.refresh.window.jitter = 0.05 14:59:57 policy-pap | sasl.login.retry.backoff.max.ms = 10000 14:59:57 policy-pap | sasl.login.retry.backoff.ms = 100 14:59:57 policy-pap | sasl.mechanism = GSSAPI 14:59:57 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 14:59:57 policy-pap | sasl.oauthbearer.expected.audience = null 14:59:57 policy-pap | sasl.oauthbearer.expected.issuer = null 14:59:57 policy-pap | sasl.oauthbearer.header.urlencode = false 14:59:57 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 14:59:57 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 14:59:57 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 14:59:57 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 14:59:57 policy-pap | sasl.oauthbearer.scope.claim.name = scope 14:59:57 policy-pap | sasl.oauthbearer.sub.claim.name = sub 14:59:57 policy-pap | sasl.oauthbearer.token.endpoint.url = null 14:59:57 policy-pap | security.protocol = PLAINTEXT 14:59:57 policy-pap | security.providers = null 14:59:57 policy-pap | send.buffer.bytes = 131072 14:59:57 policy-pap | session.timeout.ms = 45000 14:59:57 policy-pap | socket.connection.setup.timeout.max.ms = 30000 14:59:57 policy-pap | socket.connection.setup.timeout.ms = 10000 14:59:57 policy-pap | ssl.cipher.suites = null 14:59:57 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 14:59:57 policy-pap | ssl.endpoint.identification.algorithm = https 14:59:57 policy-pap | ssl.engine.factory.class = null 14:59:57 policy-pap | ssl.key.password = null 14:59:57 policy-pap | ssl.keymanager.algorithm = SunX509 14:59:57 policy-pap | ssl.keystore.certificate.chain = null 14:59:57 policy-pap | ssl.keystore.key = null 14:59:57 policy-pap | ssl.keystore.location = null 14:59:57 policy-pap | ssl.keystore.password = null 14:59:57 policy-pap | ssl.keystore.type = JKS 14:59:57 policy-pap | ssl.protocol = TLSv1.3 14:59:57 policy-pap | ssl.provider = null 14:59:57 policy-pap | ssl.secure.random.implementation = null 14:59:57 policy-pap | ssl.trustmanager.algorithm = PKIX 14:59:57 policy-pap | ssl.truststore.certificates = null 14:59:57 policy-pap | ssl.truststore.location = null 14:59:57 policy-pap | ssl.truststore.password = null 14:59:57 policy-pap | ssl.truststore.type = JKS 14:59:57 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 14:59:57 policy-pap | 14:59:57 policy-pap | [2025-06-13T14:57:11.125+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector 14:59:57 policy-pap | [2025-06-13T14:57:11.133+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 14:59:57 policy-pap | [2025-06-13T14:57:11.133+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 14:59:57 policy-pap | [2025-06-13T14:57:11.133+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749826631133 14:59:57 policy-pap | [2025-06-13T14:57:11.133+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-policy-pap-2, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap 14:59:57 policy-pap | [2025-06-13T14:57:11.481+00:00|INFO|PapDatabaseInitializer|main] Created initial pdpGroup in DB - PdpGroups(groups=[PdpGroup(name=defaultGroup, description=The 
default group that registers all supported policy types and pdps., pdpGroupState=ACTIVE, properties=null, pdpSubgroups=[PdpSubGroup(pdpType=xacml, supportedPolicyTypes=[onap.policies.controlloop.guard.common.FrequencyLimiter 1.0.0, onap.policies.controlloop.guard.common.MinMax 1.0.0, onap.policies.controlloop.guard.common.Blacklist 1.0.0, onap.policies.controlloop.guard.common.Filter 1.0.0, onap.policies.controlloop.guard.coordination.FirstBlocksSecond 1.0.0, onap.policies.monitoring.* 1.0.0, onap.policies.optimization.* 1.0.0, onap.policies.optimization.resource.AffinityPolicy 1.0.0, onap.policies.optimization.resource.DistancePolicy 1.0.0, onap.policies.optimization.resource.HpaPolicy 1.0.0, onap.policies.optimization.resource.OptimizationPolicy 1.0.0, onap.policies.optimization.resource.PciPolicy 1.0.0, onap.policies.optimization.service.QueryPolicy 1.0.0, onap.policies.optimization.service.SubscriberPolicy 1.0.0, onap.policies.optimization.resource.Vim_fit 1.0.0, onap.policies.optimization.resource.VnfPolicy 1.0.0, onap.policies.native.Xacml 1.0.0, onap.policies.Naming 1.0.0, onap.policies.match.* 1.0.0], policies=[SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP 1.0.0], currentInstanceCount=0, desiredInstanceCount=1, properties=null, pdpInstances=null)])]) from /opt/app/policy/pap/etc/mounted/groups.json 14:59:57 policy-pap | [2025-06-13T14:57:11.601+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning 14:59:57 policy-pap | [2025-06-13T14:57:11.677+00:00|INFO|InitializeUserDetailsBeanManagerConfigurer$InitializeUserDetailsManagerConfigurer|main] Global AuthenticationManager configured with UserDetailsService bean with name inMemoryUserDetailsManager 14:59:57 policy-pap | [2025-06-13T14:57:11.887+00:00|INFO|OptionalValidatorFactoryBean|main] Failed to set up a Bean Validation provider: jakarta.validation.NoProviderFoundException: Unable to create a Configuration, because no Jakarta Validation provider could be found. Add a provider like Hibernate Validator (RI) to your classpath. 
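[Editor's note] For readers tracing the ConsumerConfig dumps above: the sketch below is a plain standalone Java client consumer, not PAP's own wiring (PAP goes through SingleThreadedKafkaTopicSource). It uses only values visible in the log — bootstrap server kafka:9092, group policy-pap, latest offset reset, String deserializers — and subscribes to the policy-pdp-pap topic.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class PdpPapConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092"); // bootstrap.servers above
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "policy-pap");          // group.id above
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");     // auto.offset.reset above
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("policy-pdp-pap"));
            // One 15 s poll, matching the fetchTimeout=15000 reported by the topic source.
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(15));
            for (ConsumerRecord<String, String> record : records) {
                System.out.println(record.value()); // JSON PDP messages, as seen later in this log
            }
        }
    }
}
```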
14:59:57 policy-pap | [2025-06-13T14:57:12.610+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoints beneath base path '' 14:59:57 policy-pap | [2025-06-13T14:57:12.732+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] 14:59:57 policy-pap | [2025-06-13T14:57:12.753+00:00|INFO|TomcatWebServer|main] Tomcat started on port 6969 (http) with context path '/policy/pap/v1' 14:59:57 policy-pap | [2025-06-13T14:57:12.775+00:00|INFO|ServiceManager|main] Policy PAP starting 14:59:57 policy-pap | [2025-06-13T14:57:12.775+00:00|INFO|ServiceManager|main] Policy PAP starting Meter Registry 14:59:57 policy-pap | [2025-06-13T14:57:12.775+00:00|INFO|ServiceManager|main] Policy PAP starting PAP parameters 14:59:57 policy-pap | [2025-06-13T14:57:12.776+00:00|INFO|ServiceManager|main] Policy PAP starting Pdp Heartbeat Listener 14:59:57 policy-pap | [2025-06-13T14:57:12.776+00:00|INFO|ServiceManager|main] Policy PAP starting Response Request ID Dispatcher 14:59:57 policy-pap | [2025-06-13T14:57:12.776+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Request ID Dispatcher 14:59:57 policy-pap | [2025-06-13T14:57:12.776+00:00|INFO|ServiceManager|main] Policy PAP starting Response Message Dispatcher 14:59:57 policy-pap | [2025-06-13T14:57:12.778+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=044ad9e7-7f73-4e67-ada5-d3c6274784bc, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@2096ade6 14:59:57 policy-pap | [2025-06-13T14:57:12.788+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=044ad9e7-7f73-4e67-ada5-d3c6274784bc, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 14:59:57 policy-pap | [2025-06-13T14:57:12.788+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 14:59:57 policy-pap | allow.auto.create.topics = true 14:59:57 policy-pap | auto.commit.interval.ms = 5000 14:59:57 policy-pap | auto.include.jmx.reporter = true 14:59:57 policy-pap | auto.offset.reset = latest 14:59:57 policy-pap | bootstrap.servers = [kafka:9092] 14:59:57 policy-pap | check.crcs = true 14:59:57 policy-pap | client.dns.lookup = use_all_dns_ips 14:59:57 policy-pap | client.id = consumer-044ad9e7-7f73-4e67-ada5-d3c6274784bc-3 14:59:57 policy-pap | client.rack = 14:59:57 policy-pap | connections.max.idle.ms = 540000 14:59:57 policy-pap | default.api.timeout.ms = 60000 14:59:57 policy-pap | enable.auto.commit = true 14:59:57 policy-pap | enable.metrics.push = true 14:59:57 policy-pap | exclude.internal.topics = true 14:59:57 policy-pap | 
fetch.max.bytes = 52428800 14:59:57 policy-pap | fetch.max.wait.ms = 500 14:59:57 policy-pap | fetch.min.bytes = 1 14:59:57 policy-pap | group.id = 044ad9e7-7f73-4e67-ada5-d3c6274784bc 14:59:57 policy-pap | group.instance.id = null 14:59:57 policy-pap | group.protocol = classic 14:59:57 policy-pap | group.remote.assignor = null 14:59:57 policy-pap | heartbeat.interval.ms = 3000 14:59:57 policy-pap | interceptor.classes = [] 14:59:57 policy-pap | internal.leave.group.on.close = true 14:59:57 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 14:59:57 policy-pap | isolation.level = read_uncommitted 14:59:57 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 14:59:57 policy-pap | max.partition.fetch.bytes = 1048576 14:59:57 policy-pap | max.poll.interval.ms = 300000 14:59:57 policy-pap | max.poll.records = 500 14:59:57 policy-pap | metadata.max.age.ms = 300000 14:59:57 policy-pap | metadata.recovery.strategy = none 14:59:57 policy-pap | metric.reporters = [] 14:59:57 policy-pap | metrics.num.samples = 2 14:59:57 policy-pap | metrics.recording.level = INFO 14:59:57 policy-pap | metrics.sample.window.ms = 30000 14:59:57 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 14:59:57 policy-pap | receive.buffer.bytes = 65536 14:59:57 policy-pap | reconnect.backoff.max.ms = 1000 14:59:57 policy-pap | reconnect.backoff.ms = 50 14:59:57 policy-pap | request.timeout.ms = 30000 14:59:57 policy-pap | retry.backoff.max.ms = 1000 14:59:57 policy-pap | retry.backoff.ms = 100 14:59:57 policy-pap | sasl.client.callback.handler.class = null 14:59:57 policy-pap | sasl.jaas.config = null 14:59:57 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 14:59:57 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 14:59:57 policy-pap | sasl.kerberos.service.name = null 14:59:57 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 14:59:57 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 14:59:57 policy-pap | sasl.login.callback.handler.class = null 14:59:57 policy-pap | sasl.login.class = null 14:59:57 policy-pap | sasl.login.connect.timeout.ms = null 14:59:57 policy-pap | sasl.login.read.timeout.ms = null 14:59:57 policy-pap | sasl.login.refresh.buffer.seconds = 300 14:59:57 policy-pap | sasl.login.refresh.min.period.seconds = 60 14:59:57 policy-pap | sasl.login.refresh.window.factor = 0.8 14:59:57 policy-pap | sasl.login.refresh.window.jitter = 0.05 14:59:57 policy-pap | sasl.login.retry.backoff.max.ms = 10000 14:59:57 policy-pap | sasl.login.retry.backoff.ms = 100 14:59:57 policy-pap | sasl.mechanism = GSSAPI 14:59:57 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 14:59:57 policy-pap | sasl.oauthbearer.expected.audience = null 14:59:57 policy-pap | sasl.oauthbearer.expected.issuer = null 14:59:57 policy-pap | sasl.oauthbearer.header.urlencode = false 14:59:57 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 14:59:57 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 14:59:57 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 14:59:57 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 14:59:57 policy-pap | sasl.oauthbearer.scope.claim.name = scope 14:59:57 policy-pap | sasl.oauthbearer.sub.claim.name = sub 14:59:57 policy-pap | sasl.oauthbearer.token.endpoint.url = null 14:59:57 policy-pap | security.protocol = PLAINTEXT 14:59:57 policy-pap | 
security.providers = null 14:59:57 policy-pap | send.buffer.bytes = 131072 14:59:57 policy-pap | session.timeout.ms = 45000 14:59:57 policy-pap | socket.connection.setup.timeout.max.ms = 30000 14:59:57 policy-pap | socket.connection.setup.timeout.ms = 10000 14:59:57 policy-pap | ssl.cipher.suites = null 14:59:57 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 14:59:57 policy-pap | ssl.endpoint.identification.algorithm = https 14:59:57 policy-pap | ssl.engine.factory.class = null 14:59:57 policy-pap | ssl.key.password = null 14:59:57 policy-pap | ssl.keymanager.algorithm = SunX509 14:59:57 policy-pap | ssl.keystore.certificate.chain = null 14:59:57 policy-pap | ssl.keystore.key = null 14:59:57 policy-pap | ssl.keystore.location = null 14:59:57 policy-pap | ssl.keystore.password = null 14:59:57 policy-pap | ssl.keystore.type = JKS 14:59:57 policy-pap | ssl.protocol = TLSv1.3 14:59:57 policy-pap | ssl.provider = null 14:59:57 policy-pap | ssl.secure.random.implementation = null 14:59:57 policy-pap | ssl.trustmanager.algorithm = PKIX 14:59:57 policy-pap | ssl.truststore.certificates = null 14:59:57 policy-pap | ssl.truststore.location = null 14:59:57 policy-pap | ssl.truststore.password = null 14:59:57 policy-pap | ssl.truststore.type = JKS 14:59:57 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 14:59:57 policy-pap | 14:59:57 policy-pap | [2025-06-13T14:57:12.789+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector 14:59:57 policy-pap | [2025-06-13T14:57:12.795+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 14:59:57 policy-pap | [2025-06-13T14:57:12.795+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 14:59:57 policy-pap | [2025-06-13T14:57:12.795+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749826632795 14:59:57 policy-pap | [2025-06-13T14:57:12.796+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-044ad9e7-7f73-4e67-ada5-d3c6274784bc-3, groupId=044ad9e7-7f73-4e67-ada5-d3c6274784bc] Subscribed to topic(s): policy-pdp-pap 14:59:57 policy-pap | [2025-06-13T14:57:12.796+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Message Dispatcher 14:59:57 policy-pap | [2025-06-13T14:57:12.796+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=7f2861c0-7dab-4ee1-a7da-eaad47fd4b7e, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@687fa4d0 14:59:57 policy-pap | [2025-06-13T14:57:12.796+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=7f2861c0-7dab-4ee1-a7da-eaad47fd4b7e, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, 
effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 14:59:57 policy-pap | [2025-06-13T14:57:12.797+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 14:59:57 policy-pap | allow.auto.create.topics = true 14:59:57 policy-pap | auto.commit.interval.ms = 5000 14:59:57 policy-pap | auto.include.jmx.reporter = true 14:59:57 policy-pap | auto.offset.reset = latest 14:59:57 policy-pap | bootstrap.servers = [kafka:9092] 14:59:57 policy-pap | check.crcs = true 14:59:57 policy-pap | client.dns.lookup = use_all_dns_ips 14:59:57 policy-pap | client.id = consumer-policy-pap-4 14:59:57 policy-pap | client.rack = 14:59:57 policy-pap | connections.max.idle.ms = 540000 14:59:57 policy-pap | default.api.timeout.ms = 60000 14:59:57 policy-pap | enable.auto.commit = true 14:59:57 policy-pap | enable.metrics.push = true 14:59:57 policy-pap | exclude.internal.topics = true 14:59:57 policy-pap | fetch.max.bytes = 52428800 14:59:57 policy-pap | fetch.max.wait.ms = 500 14:59:57 policy-pap | fetch.min.bytes = 1 14:59:57 policy-pap | group.id = policy-pap 14:59:57 policy-pap | group.instance.id = null 14:59:57 policy-pap | group.protocol = classic 14:59:57 policy-pap | group.remote.assignor = null 14:59:57 policy-pap | heartbeat.interval.ms = 3000 14:59:57 policy-pap | interceptor.classes = [] 14:59:57 policy-pap | internal.leave.group.on.close = true 14:59:57 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 14:59:57 policy-pap | isolation.level = read_uncommitted 14:59:57 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 14:59:57 policy-pap | max.partition.fetch.bytes = 1048576 14:59:57 policy-pap | max.poll.interval.ms = 300000 14:59:57 policy-pap | max.poll.records = 500 14:59:57 policy-pap | metadata.max.age.ms = 300000 14:59:57 policy-pap | metadata.recovery.strategy = none 14:59:57 policy-pap | metric.reporters = [] 14:59:57 policy-pap | metrics.num.samples = 2 14:59:57 policy-pap | metrics.recording.level = INFO 14:59:57 policy-pap | metrics.sample.window.ms = 30000 14:59:57 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 14:59:57 policy-pap | receive.buffer.bytes = 65536 14:59:57 policy-pap | reconnect.backoff.max.ms = 1000 14:59:57 policy-pap | reconnect.backoff.ms = 50 14:59:57 policy-pap | request.timeout.ms = 30000 14:59:57 policy-pap | retry.backoff.max.ms = 1000 14:59:57 policy-pap | retry.backoff.ms = 100 14:59:57 policy-pap | sasl.client.callback.handler.class = null 14:59:57 policy-pap | sasl.jaas.config = null 14:59:57 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 14:59:57 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 14:59:57 policy-pap | sasl.kerberos.service.name = null 14:59:57 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 14:59:57 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 14:59:57 policy-pap | sasl.login.callback.handler.class = null 14:59:57 policy-pap | sasl.login.class = null 14:59:57 policy-pap | sasl.login.connect.timeout.ms = null 14:59:57 policy-pap | sasl.login.read.timeout.ms = null 14:59:57 policy-pap | sasl.login.refresh.buffer.seconds = 300 14:59:57 policy-pap | sasl.login.refresh.min.period.seconds = 60 14:59:57 policy-pap | sasl.login.refresh.window.factor = 0.8 14:59:57 policy-pap | sasl.login.refresh.window.jitter = 0.05 14:59:57 policy-pap | sasl.login.retry.backoff.max.ms = 10000 14:59:57 policy-pap | 
sasl.login.retry.backoff.ms = 100 14:59:57 policy-pap | sasl.mechanism = GSSAPI 14:59:57 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 14:59:57 policy-pap | sasl.oauthbearer.expected.audience = null 14:59:57 policy-pap | sasl.oauthbearer.expected.issuer = null 14:59:57 policy-pap | sasl.oauthbearer.header.urlencode = false 14:59:57 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 14:59:57 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 14:59:57 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 14:59:57 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 14:59:57 policy-pap | sasl.oauthbearer.scope.claim.name = scope 14:59:57 policy-pap | sasl.oauthbearer.sub.claim.name = sub 14:59:57 policy-pap | sasl.oauthbearer.token.endpoint.url = null 14:59:57 policy-pap | security.protocol = PLAINTEXT 14:59:57 policy-pap | security.providers = null 14:59:57 policy-pap | send.buffer.bytes = 131072 14:59:57 policy-pap | session.timeout.ms = 45000 14:59:57 policy-pap | socket.connection.setup.timeout.max.ms = 30000 14:59:57 policy-pap | socket.connection.setup.timeout.ms = 10000 14:59:57 policy-pap | ssl.cipher.suites = null 14:59:57 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 14:59:57 policy-pap | ssl.endpoint.identification.algorithm = https 14:59:57 policy-pap | ssl.engine.factory.class = null 14:59:57 policy-pap | ssl.key.password = null 14:59:57 policy-pap | ssl.keymanager.algorithm = SunX509 14:59:57 policy-pap | ssl.keystore.certificate.chain = null 14:59:57 policy-pap | ssl.keystore.key = null 14:59:57 policy-pap | ssl.keystore.location = null 14:59:57 policy-pap | ssl.keystore.password = null 14:59:57 policy-pap | ssl.keystore.type = JKS 14:59:57 policy-pap | ssl.protocol = TLSv1.3 14:59:57 policy-pap | ssl.provider = null 14:59:57 policy-pap | ssl.secure.random.implementation = null 14:59:57 policy-pap | ssl.trustmanager.algorithm = PKIX 14:59:57 policy-pap | ssl.truststore.certificates = null 14:59:57 policy-pap | ssl.truststore.location = null 14:59:57 policy-pap | ssl.truststore.password = null 14:59:57 policy-pap | ssl.truststore.type = JKS 14:59:57 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 14:59:57 policy-pap | 14:59:57 policy-pap | [2025-06-13T14:57:12.797+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector 14:59:57 policy-pap | [2025-06-13T14:57:12.802+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 14:59:57 policy-pap | [2025-06-13T14:57:12.802+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 14:59:57 policy-pap | [2025-06-13T14:57:12.802+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749826632802 14:59:57 policy-pap | [2025-06-13T14:57:12.803+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap 14:59:57 policy-pap | [2025-06-13T14:57:12.803+00:00|INFO|ServiceManager|main] Policy PAP starting topics 14:59:57 policy-pap | [2025-06-13T14:57:12.803+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=7f2861c0-7dab-4ee1-a7da-eaad47fd4b7e, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-heartbeat,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, 
useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 14:59:57 policy-pap | [2025-06-13T14:57:12.803+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=044ad9e7-7f73-4e67-ada5-d3c6274784bc, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 14:59:57 policy-pap | [2025-06-13T14:57:12.803+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=4bb38c68-3eb8-4da3-9a8e-86958093f792, alive=false, publisher=null]]: starting 14:59:57 policy-pap | [2025-06-13T14:57:12.814+00:00|INFO|ProducerConfig|main] ProducerConfig values: 14:59:57 policy-pap | acks = -1 14:59:57 policy-pap | auto.include.jmx.reporter = true 14:59:57 policy-pap | batch.size = 16384 14:59:57 policy-pap | bootstrap.servers = [kafka:9092] 14:59:57 policy-pap | buffer.memory = 33554432 14:59:57 policy-pap | client.dns.lookup = use_all_dns_ips 14:59:57 policy-pap | client.id = producer-1 14:59:57 policy-pap | compression.gzip.level = -1 14:59:57 policy-pap | compression.lz4.level = 9 14:59:57 policy-pap | compression.type = none 14:59:57 policy-pap | compression.zstd.level = 3 14:59:57 policy-pap | connections.max.idle.ms = 540000 14:59:57 policy-pap | delivery.timeout.ms = 120000 14:59:57 policy-pap | enable.idempotence = true 14:59:57 policy-pap | enable.metrics.push = true 14:59:57 policy-pap | interceptor.classes = [] 14:59:57 policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 14:59:57 policy-pap | linger.ms = 0 14:59:57 policy-pap | max.block.ms = 60000 14:59:57 policy-pap | max.in.flight.requests.per.connection = 5 14:59:57 policy-pap | max.request.size = 1048576 14:59:57 policy-pap | metadata.max.age.ms = 300000 14:59:57 policy-pap | metadata.max.idle.ms = 300000 14:59:57 policy-pap | metadata.recovery.strategy = none 14:59:57 policy-pap | metric.reporters = [] 14:59:57 policy-pap | metrics.num.samples = 2 14:59:57 policy-pap | metrics.recording.level = INFO 14:59:57 policy-pap | metrics.sample.window.ms = 30000 14:59:57 policy-pap | partitioner.adaptive.partitioning.enable = true 14:59:57 policy-pap | partitioner.availability.timeout.ms = 0 14:59:57 policy-pap | partitioner.class = null 14:59:57 policy-pap | partitioner.ignore.keys = false 14:59:57 policy-pap | receive.buffer.bytes = 32768 14:59:57 policy-pap | reconnect.backoff.max.ms = 1000 14:59:57 policy-pap | reconnect.backoff.ms = 50 14:59:57 policy-pap | request.timeout.ms = 30000 14:59:57 policy-pap | retries = 2147483647 14:59:57 policy-pap | retry.backoff.max.ms = 1000 14:59:57 policy-pap | retry.backoff.ms = 100 14:59:57 policy-pap | sasl.client.callback.handler.class = null 14:59:57 policy-pap | sasl.jaas.config = null 14:59:57 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 14:59:57 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 14:59:57 policy-pap 
| sasl.kerberos.service.name = null 14:59:57 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 14:59:57 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 14:59:57 policy-pap | sasl.login.callback.handler.class = null 14:59:57 policy-pap | sasl.login.class = null 14:59:57 policy-pap | sasl.login.connect.timeout.ms = null 14:59:57 policy-pap | sasl.login.read.timeout.ms = null 14:59:57 policy-pap | sasl.login.refresh.buffer.seconds = 300 14:59:57 policy-pap | sasl.login.refresh.min.period.seconds = 60 14:59:57 policy-pap | sasl.login.refresh.window.factor = 0.8 14:59:57 policy-pap | sasl.login.refresh.window.jitter = 0.05 14:59:57 policy-pap | sasl.login.retry.backoff.max.ms = 10000 14:59:57 policy-pap | sasl.login.retry.backoff.ms = 100 14:59:57 policy-pap | sasl.mechanism = GSSAPI 14:59:57 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 14:59:57 policy-pap | sasl.oauthbearer.expected.audience = null 14:59:57 policy-pap | sasl.oauthbearer.expected.issuer = null 14:59:57 policy-pap | sasl.oauthbearer.header.urlencode = false 14:59:57 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 14:59:57 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 14:59:57 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 14:59:57 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 14:59:57 policy-pap | sasl.oauthbearer.scope.claim.name = scope 14:59:57 policy-pap | sasl.oauthbearer.sub.claim.name = sub 14:59:57 policy-pap | sasl.oauthbearer.token.endpoint.url = null 14:59:57 policy-pap | security.protocol = PLAINTEXT 14:59:57 policy-pap | security.providers = null 14:59:57 policy-pap | send.buffer.bytes = 131072 14:59:57 policy-pap | socket.connection.setup.timeout.max.ms = 30000 14:59:57 policy-pap | socket.connection.setup.timeout.ms = 10000 14:59:57 policy-pap | ssl.cipher.suites = null 14:59:57 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 14:59:57 policy-pap | ssl.endpoint.identification.algorithm = https 14:59:57 policy-pap | ssl.engine.factory.class = null 14:59:57 policy-pap | ssl.key.password = null 14:59:57 policy-pap | ssl.keymanager.algorithm = SunX509 14:59:57 policy-pap | ssl.keystore.certificate.chain = null 14:59:57 policy-pap | ssl.keystore.key = null 14:59:57 policy-pap | ssl.keystore.location = null 14:59:57 policy-pap | ssl.keystore.password = null 14:59:57 policy-pap | ssl.keystore.type = JKS 14:59:57 policy-pap | ssl.protocol = TLSv1.3 14:59:57 policy-pap | ssl.provider = null 14:59:57 policy-pap | ssl.secure.random.implementation = null 14:59:57 policy-pap | ssl.trustmanager.algorithm = PKIX 14:59:57 policy-pap | ssl.truststore.certificates = null 14:59:57 policy-pap | ssl.truststore.location = null 14:59:57 policy-pap | ssl.truststore.password = null 14:59:57 policy-pap | ssl.truststore.type = JKS 14:59:57 policy-pap | transaction.timeout.ms = 60000 14:59:57 policy-pap | transactional.id = null 14:59:57 policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 14:59:57 policy-pap | 14:59:57 policy-pap | [2025-06-13T14:57:12.815+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector 14:59:57 policy-pap | [2025-06-13T14:57:12.825+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. 
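[Editor's note] The ProducerConfig block above describes an idempotent producer: enable.idempotence = true, acks = -1, retries = 2147483647, String serializers. A minimal sketch of an equivalent standalone publisher to policy-pdp-pap follows — illustrative only, not the InlineKafkaTopicSink implementation, and the payload string is a hypothetical stand-in shaped like the JSON messages later in this log.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class PdpPapPublisherSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true); // logged: enable.idempotence = true
        props.put(ProducerConfig.ACKS_CONFIG, "all");              // logged as acks = -1 (same meaning)
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Illustrative payload only; real messages carry full PDP_* JSON bodies.
            producer.send(new ProducerRecord<>("policy-pdp-pap",
                "{\"messageName\":\"PDP_TOPIC_CHECK\"}"));
            producer.flush(); // block until the broker has acknowledged the send
        }
    }
}
```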
14:59:57 policy-pap | [2025-06-13T14:57:12.839+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 14:59:57 policy-pap | [2025-06-13T14:57:12.839+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 14:59:57 policy-pap | [2025-06-13T14:57:12.839+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749826632839 14:59:57 policy-pap | [2025-06-13T14:57:12.839+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=4bb38c68-3eb8-4da3-9a8e-86958093f792, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 14:59:57 policy-pap | [2025-06-13T14:57:12.839+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=792f5b7e-8121-456f-8173-aac7159e2ce8, alive=false, publisher=null]]: starting 14:59:57 policy-pap | [2025-06-13T14:57:12.840+00:00|INFO|ProducerConfig|main] ProducerConfig values: 14:59:57 policy-pap | acks = -1 14:59:57 policy-pap | auto.include.jmx.reporter = true 14:59:57 policy-pap | batch.size = 16384 14:59:57 policy-pap | bootstrap.servers = [kafka:9092] 14:59:57 policy-pap | buffer.memory = 33554432 14:59:57 policy-pap | client.dns.lookup = use_all_dns_ips 14:59:57 policy-pap | client.id = producer-2 14:59:57 policy-pap | compression.gzip.level = -1 14:59:57 policy-pap | compression.lz4.level = 9 14:59:57 policy-pap | compression.type = none 14:59:57 policy-pap | compression.zstd.level = 3 14:59:57 policy-pap | connections.max.idle.ms = 540000 14:59:57 policy-pap | delivery.timeout.ms = 120000 14:59:57 policy-pap | enable.idempotence = true 14:59:57 policy-pap | enable.metrics.push = true 14:59:57 policy-pap | interceptor.classes = [] 14:59:57 policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 14:59:57 policy-pap | linger.ms = 0 14:59:57 policy-pap | max.block.ms = 60000 14:59:57 policy-pap | max.in.flight.requests.per.connection = 5 14:59:57 policy-pap | max.request.size = 1048576 14:59:57 policy-pap | metadata.max.age.ms = 300000 14:59:57 policy-pap | metadata.max.idle.ms = 300000 14:59:57 policy-pap | metadata.recovery.strategy = none 14:59:57 policy-pap | metric.reporters = [] 14:59:57 policy-pap | metrics.num.samples = 2 14:59:57 policy-pap | metrics.recording.level = INFO 14:59:57 policy-pap | metrics.sample.window.ms = 30000 14:59:57 policy-pap | partitioner.adaptive.partitioning.enable = true 14:59:57 policy-pap | partitioner.availability.timeout.ms = 0 14:59:57 policy-pap | partitioner.class = null 14:59:57 policy-pap | partitioner.ignore.keys = false 14:59:57 policy-pap | receive.buffer.bytes = 32768 14:59:57 policy-pap | reconnect.backoff.max.ms = 1000 14:59:57 policy-pap | reconnect.backoff.ms = 50 14:59:57 policy-pap | request.timeout.ms = 30000 14:59:57 policy-pap | retries = 2147483647 14:59:57 policy-pap | retry.backoff.max.ms = 1000 14:59:57 policy-pap | retry.backoff.ms = 100 14:59:57 policy-pap | sasl.client.callback.handler.class = null 14:59:57 policy-pap | sasl.jaas.config = null 14:59:57 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 14:59:57 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 14:59:57 policy-pap | sasl.kerberos.service.name = null 14:59:57 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 14:59:57 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 14:59:57 policy-pap | sasl.login.callback.handler.class = null 14:59:57 policy-pap | sasl.login.class = null 14:59:57 policy-pap | 
sasl.login.connect.timeout.ms = null 14:59:57 policy-pap | sasl.login.read.timeout.ms = null 14:59:57 policy-pap | sasl.login.refresh.buffer.seconds = 300 14:59:57 policy-pap | sasl.login.refresh.min.period.seconds = 60 14:59:57 policy-pap | sasl.login.refresh.window.factor = 0.8 14:59:57 policy-pap | sasl.login.refresh.window.jitter = 0.05 14:59:57 policy-pap | sasl.login.retry.backoff.max.ms = 10000 14:59:57 policy-pap | sasl.login.retry.backoff.ms = 100 14:59:57 policy-pap | sasl.mechanism = GSSAPI 14:59:57 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 14:59:57 policy-pap | sasl.oauthbearer.expected.audience = null 14:59:57 policy-pap | sasl.oauthbearer.expected.issuer = null 14:59:57 policy-pap | sasl.oauthbearer.header.urlencode = false 14:59:57 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 14:59:57 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 14:59:57 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 14:59:57 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 14:59:57 policy-pap | sasl.oauthbearer.scope.claim.name = scope 14:59:57 policy-pap | sasl.oauthbearer.sub.claim.name = sub 14:59:57 policy-pap | sasl.oauthbearer.token.endpoint.url = null 14:59:57 policy-pap | security.protocol = PLAINTEXT 14:59:57 policy-pap | security.providers = null 14:59:57 policy-pap | send.buffer.bytes = 131072 14:59:57 policy-pap | socket.connection.setup.timeout.max.ms = 30000 14:59:57 policy-pap | socket.connection.setup.timeout.ms = 10000 14:59:57 policy-pap | ssl.cipher.suites = null 14:59:57 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 14:59:57 policy-pap | ssl.endpoint.identification.algorithm = https 14:59:57 policy-pap | ssl.engine.factory.class = null 14:59:57 policy-pap | ssl.key.password = null 14:59:57 policy-pap | ssl.keymanager.algorithm = SunX509 14:59:57 policy-pap | ssl.keystore.certificate.chain = null 14:59:57 policy-pap | ssl.keystore.key = null 14:59:57 policy-pap | ssl.keystore.location = null 14:59:57 policy-pap | ssl.keystore.password = null 14:59:57 policy-pap | ssl.keystore.type = JKS 14:59:57 policy-pap | ssl.protocol = TLSv1.3 14:59:57 policy-pap | ssl.provider = null 14:59:57 policy-pap | ssl.secure.random.implementation = null 14:59:57 policy-pap | ssl.trustmanager.algorithm = PKIX 14:59:57 policy-pap | ssl.truststore.certificates = null 14:59:57 policy-pap | ssl.truststore.location = null 14:59:57 policy-pap | ssl.truststore.password = null 14:59:57 policy-pap | ssl.truststore.type = JKS 14:59:57 policy-pap | transaction.timeout.ms = 60000 14:59:57 policy-pap | transactional.id = null 14:59:57 policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 14:59:57 policy-pap | 14:59:57 policy-pap | [2025-06-13T14:57:12.840+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector 14:59:57 policy-pap | [2025-06-13T14:57:12.840+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-2] Instantiated an idempotent producer. 
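[Editor's note] The group-coordination lines that follow (Discovered group coordinator, (Re-)joining group, Finished assignment, Adding newly assigned partitions: policy-pdp-pap-0) are the classic-protocol join/sync handshake. A sketch of observing that same lifecycle from application code via a ConsumerRebalanceListener is below; the group id is hypothetical, and the kafka:9092 / policy-pdp-pap values are carried over from the log.

```java
import java.time.Duration;
import java.util.Collection;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

public class RebalanceWatchSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "rebalance-watch"); // hypothetical group id
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("policy-pdp-pap"), new ConsumerRebalanceListener() {
                @Override
                public void onPartitionsRevoked(Collection<TopicPartition> parts) {
                    System.out.println("revoked: " + parts);
                }
                @Override
                public void onPartitionsAssigned(Collection<TopicPartition> parts) {
                    // Fires after the join/sync handshake completes, the same point at
                    // which the log below prints "Adding newly assigned partitions".
                    System.out.println("assigned: " + parts);
                }
            });
            consumer.poll(Duration.ofSeconds(15)); // drives the group join
        }
    }
}
```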
14:59:57 policy-pap | [2025-06-13T14:57:12.846+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 14:59:57 policy-pap | [2025-06-13T14:57:12.846+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 14:59:57 policy-pap | [2025-06-13T14:57:12.846+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749826632846 14:59:57 policy-pap | [2025-06-13T14:57:12.846+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=792f5b7e-8121-456f-8173-aac7159e2ce8, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 14:59:57 policy-pap | [2025-06-13T14:57:12.846+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator 14:59:57 policy-pap | [2025-06-13T14:57:12.846+00:00|INFO|ServiceManager|main] Policy PAP starting PDP publisher 14:59:57 policy-pap | [2025-06-13T14:57:12.847+00:00|INFO|ServiceManager|main] Policy PAP starting Policy Notification publisher 14:59:57 policy-pap | [2025-06-13T14:57:12.847+00:00|INFO|ServiceManager|main] Policy PAP starting PDP update timers 14:59:57 policy-pap | [2025-06-13T14:57:12.850+00:00|INFO|ServiceManager|main] Policy PAP starting PDP state-change timers 14:59:57 policy-pap | [2025-06-13T14:57:12.851+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification lock 14:59:57 policy-pap | [2025-06-13T14:57:12.851+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification requests 14:59:57 policy-pap | [2025-06-13T14:57:12.851+00:00|INFO|ServiceManager|main] Policy PAP starting PDP expiration timer 14:59:57 policy-pap | [2025-06-13T14:57:12.852+00:00|INFO|TimerManager|Thread-9] timer manager update started 14:59:57 policy-pap | [2025-06-13T14:57:12.855+00:00|INFO|ServiceManager|main] Policy PAP started 14:59:57 policy-pap | [2025-06-13T14:57:12.856+00:00|INFO|PolicyPapApplication|main] Started PolicyPapApplication in 9.597 seconds (process running for 10.201) 14:59:57 policy-pap | [2025-06-13T14:57:12.856+00:00|INFO|TimerManager|Thread-10] timer manager state-change started 14:59:57 policy-pap | [2025-06-13T14:57:13.292+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-044ad9e7-7f73-4e67-ada5-d3c6274784bc-3, groupId=044ad9e7-7f73-4e67-ada5-d3c6274784bc] The metadata response from the cluster reported a recoverable issue with correlation id 3 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 14:59:57 policy-pap | [2025-06-13T14:57:13.294+00:00|INFO|Metadata|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Cluster ID: d-rF8NzzQdGshpvqUU-qrg 14:59:57 policy-pap | [2025-06-13T14:57:13.294+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-044ad9e7-7f73-4e67-ada5-d3c6274784bc-3, groupId=044ad9e7-7f73-4e67-ada5-d3c6274784bc] Cluster ID: d-rF8NzzQdGshpvqUU-qrg 14:59:57 policy-pap | [2025-06-13T14:57:13.295+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: d-rF8NzzQdGshpvqUU-qrg 14:59:57 policy-pap | [2025-06-13T14:57:13.321+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 0 with epoch 0 14:59:57 policy-pap | [2025-06-13T14:57:13.321+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] ProducerId set to 1 with epoch 0 14:59:57 policy-pap | [2025-06-13T14:57:13.340+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The 
metadata response from the cluster reported a recoverable issue with correlation id 3 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 14:59:57 policy-pap | [2025-06-13T14:57:13.340+00:00|INFO|Metadata|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Cluster ID: d-rF8NzzQdGshpvqUU-qrg 14:59:57 policy-pap | [2025-06-13T14:57:13.462+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-044ad9e7-7f73-4e67-ada5-d3c6274784bc-3, groupId=044ad9e7-7f73-4e67-ada5-d3c6274784bc] The metadata response from the cluster reported a recoverable issue with correlation id 7 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 14:59:57 policy-pap | [2025-06-13T14:57:13.495+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 7 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 14:59:57 policy-pap | [2025-06-13T14:57:14.138+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 14:59:57 policy-pap | [2025-06-13T14:57:14.144+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group 14:59:57 policy-pap | [2025-06-13T14:57:14.177+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: need to re-join with the given member-id: consumer-policy-pap-4-261fe284-02d9-42cb-944f-e72879472ebf 14:59:57 policy-pap | [2025-06-13T14:57:14.177+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group 14:59:57 policy-pap | [2025-06-13T14:57:14.199+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-044ad9e7-7f73-4e67-ada5-d3c6274784bc-3, groupId=044ad9e7-7f73-4e67-ada5-d3c6274784bc] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 14:59:57 policy-pap | [2025-06-13T14:57:14.201+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-044ad9e7-7f73-4e67-ada5-d3c6274784bc-3, groupId=044ad9e7-7f73-4e67-ada5-d3c6274784bc] (Re-)joining group 14:59:57 policy-pap | [2025-06-13T14:57:14.211+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-044ad9e7-7f73-4e67-ada5-d3c6274784bc-3, groupId=044ad9e7-7f73-4e67-ada5-d3c6274784bc] Request joining group due to: need to re-join with the given member-id: consumer-044ad9e7-7f73-4e67-ada5-d3c6274784bc-3-642f12ca-8684-4142-b46a-360148203c2f 14:59:57 policy-pap | [2025-06-13T14:57:14.211+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-044ad9e7-7f73-4e67-ada5-d3c6274784bc-3, groupId=044ad9e7-7f73-4e67-ada5-d3c6274784bc] (Re-)joining group 14:59:57 policy-pap | [2025-06-13T14:57:17.203+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully joined group with generation Generation{generationId=1, memberId='consumer-policy-pap-4-261fe284-02d9-42cb-944f-e72879472ebf', protocol='range'} 14:59:57 policy-pap | [2025-06-13T14:57:17.213+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Finished assignment for 
group at generation 1: {consumer-policy-pap-4-261fe284-02d9-42cb-944f-e72879472ebf=Assignment(partitions=[policy-pdp-pap-0])} 14:59:57 policy-pap | [2025-06-13T14:57:17.218+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-044ad9e7-7f73-4e67-ada5-d3c6274784bc-3, groupId=044ad9e7-7f73-4e67-ada5-d3c6274784bc] Successfully joined group with generation Generation{generationId=1, memberId='consumer-044ad9e7-7f73-4e67-ada5-d3c6274784bc-3-642f12ca-8684-4142-b46a-360148203c2f', protocol='range'} 14:59:57 policy-pap | [2025-06-13T14:57:17.219+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-044ad9e7-7f73-4e67-ada5-d3c6274784bc-3, groupId=044ad9e7-7f73-4e67-ada5-d3c6274784bc] Finished assignment for group at generation 1: {consumer-044ad9e7-7f73-4e67-ada5-d3c6274784bc-3-642f12ca-8684-4142-b46a-360148203c2f=Assignment(partitions=[policy-pdp-pap-0])} 14:59:57 policy-pap | [2025-06-13T14:57:17.239+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully synced group in generation Generation{generationId=1, memberId='consumer-policy-pap-4-261fe284-02d9-42cb-944f-e72879472ebf', protocol='range'} 14:59:57 policy-pap | [2025-06-13T14:57:17.240+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 14:59:57 policy-pap | [2025-06-13T14:57:17.242+00:00|INFO|ConsumerRebalanceListenerInvoker|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Adding newly assigned partitions: policy-pdp-pap-0 14:59:57 policy-pap | [2025-06-13T14:57:17.243+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-044ad9e7-7f73-4e67-ada5-d3c6274784bc-3, groupId=044ad9e7-7f73-4e67-ada5-d3c6274784bc] Successfully synced group in generation Generation{generationId=1, memberId='consumer-044ad9e7-7f73-4e67-ada5-d3c6274784bc-3-642f12ca-8684-4142-b46a-360148203c2f', protocol='range'} 14:59:57 policy-pap | [2025-06-13T14:57:17.243+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-044ad9e7-7f73-4e67-ada5-d3c6274784bc-3, groupId=044ad9e7-7f73-4e67-ada5-d3c6274784bc] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 14:59:57 policy-pap | [2025-06-13T14:57:17.243+00:00|INFO|ConsumerRebalanceListenerInvoker|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-044ad9e7-7f73-4e67-ada5-d3c6274784bc-3, groupId=044ad9e7-7f73-4e67-ada5-d3c6274784bc] Adding newly assigned partitions: policy-pdp-pap-0 14:59:57 policy-pap | [2025-06-13T14:57:17.255+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-044ad9e7-7f73-4e67-ada5-d3c6274784bc-3, groupId=044ad9e7-7f73-4e67-ada5-d3c6274784bc] Found no committed offset for partition policy-pdp-pap-0 14:59:57 policy-pap | [2025-06-13T14:57:17.255+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Found no committed offset for partition policy-pdp-pap-0 14:59:57 policy-pap | [2025-06-13T14:57:17.273+00:00|INFO|SubscriptionState|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, 
currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 14:59:57 policy-pap | [2025-06-13T14:57:17.273+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-044ad9e7-7f73-4e67-ada5-d3c6274784bc-3, groupId=044ad9e7-7f73-4e67-ada5-d3c6274784bc] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 14:59:57 policy-pap | [2025-06-13T14:57:19.253+00:00|INFO|OrderedServiceImpl|KAFKA-source-policy-pdp-pap] ***** OrderedServiceImpl implementers: 14:59:57 policy-pap | [] 14:59:57 policy-pap | [2025-06-13T14:57:19.254+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 14:59:57 policy-pap | {"messageName":"PDP_TOPIC_CHECK","requestId":"e18e1fff-9deb-4367-a557-a7dc64389e1f","timestampMs":1749826634765,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e"} 14:59:57 policy-pap | [2025-06-13T14:57:19.254+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 14:59:57 policy-pap | {"messageName":"PDP_TOPIC_CHECK","requestId":"e18e1fff-9deb-4367-a557-a7dc64389e1f","timestampMs":1749826634765,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e"} 14:59:57 policy-pap | [2025-06-13T14:57:19.257+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_TOPIC_CHECK 14:59:57 policy-pap | [2025-06-13T14:57:19.257+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_TOPIC_CHECK 14:59:57 policy-pap | [2025-06-13T14:57:19.274+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 14:59:57 policy-pap | {"pdpType":"xacml","state":"PASSIVE","healthy":"HEALTHY","policies":[],"messageName":"PDP_STATUS","requestId":"8b7db761-4d49-42ed-9835-fab8afcf3c0a","timestampMs":1749826639257,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup"} 14:59:57 policy-pap | [2025-06-13T14:57:19.279+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus 14:59:57 policy-pap | [2025-06-13T14:57:19.287+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 14:59:57 policy-pap | {"pdpType":"xacml","state":"PASSIVE","healthy":"HEALTHY","policies":[],"messageName":"PDP_STATUS","requestId":"8b7db761-4d49-42ed-9835-fab8afcf3c0a","timestampMs":1749826639257,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup"} 14:59:57 policy-pap | [2025-06-13T14:57:19.869+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpUpdate starting 14:59:57 policy-pap | [2025-06-13T14:57:19.869+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpUpdate starting listener 14:59:57 policy-pap | [2025-06-13T14:57:19.869+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpUpdate starting timer 14:59:57 policy-pap | [2025-06-13T14:57:19.870+00:00|INFO|TimerManager|KAFKA-source-policy-heartbeat] update timer registered Timer [name=243683ae-56ab-4597-926a-fcce27e0e31d, expireMs=1749826669870] 14:59:57 policy-pap | [2025-06-13T14:57:19.871+00:00|INFO|TimerManager|Thread-9] update timer waiting 29999ms Timer [name=243683ae-56ab-4597-926a-fcce27e0e31d, expireMs=1749826669870] 14:59:57 policy-pap | 
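The "discarding event of type PDP_TOPIC_CHECK" lines above show PAP's type-based routing: each JSON event carries a messageName field, and types with no registered listener are simply dropped. A minimal sketch of that pattern (illustrative names only, not the actual ONAP MessageTypeDispatcher):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Consumer;

public final class MessageTypeDispatcherSketch {
    // Listeners keyed by the "messageName" field of the incoming event.
    private final Map<String, Consumer<String>> listeners = new ConcurrentHashMap<>();

    public void register(String messageName, Consumer<String> listener) {
        listeners.put(messageName, listener);
    }

    public void onTopicEvent(String messageName, String json) {
        Consumer<String> listener = listeners.get(messageName);
        if (listener == null) {
            // Mirrors the "discarding event of type ..." log lines above.
            System.out.println("discarding event of type " + messageName);
            return;
        }
        listener.accept(json);
    }
}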
[2025-06-13T14:57:19.871+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpUpdate starting enqueue 14:59:57 policy-pap | [2025-06-13T14:57:19.871+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpUpdate started 14:59:57 policy-pap | [2025-06-13T14:57:19.877+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 14:59:57 policy-pap | {"source":"pap-8d981b63-9064-4d54-8468-b1eb1f91dc26","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[{"type":"onap.policies.Naming","type_version":"1.0.0","properties":{"policy-instance-name":"ONAP_NF_NAMING_TIMESTAMP","naming-models":[{"naming-type":"VNF","naming-recipe":"AIC_CLOUD_REGION|DELIMITER|CONSTANT|DELIMITER|TIMESTAMP","name-operation":"to_lower_case()","naming-properties":[{"property-name":"AIC_CLOUD_REGION"},{"property-name":"CONSTANT","property-value":"onap-nf"},{"property-name":"TIMESTAMP"},{"property-value":"-","property-name":"DELIMITER"}]},{"naming-type":"VNFC","naming-recipe":"VNF_NAME|DELIMITER|NFC_NAMING_CODE|DELIMITER|SEQUENCE","name-operation":"to_lower_case()","naming-properties":[{"property-name":"VNF_NAME"},{"property-name":"SEQUENCE","increment-sequence":{"max":"zzz","scope":"ENTIRETY","start-value":"1","length":"3","increment":"1","sequence-type":"alpha-numeric"}},{"property-name":"NFC_NAMING_CODE"},{"property-value":"-","property-name":"DELIMITER"}]},{"naming-type":"VF-MODULE","naming-recipe":"VNF_NAME|DELIMITER|VF_MODULE_LABEL|DELIMITER|VF_MODULE_TYPE|DELIMITER|SEQUENCE","name-operation":"to_lower_case()","naming-properties":[{"property-name":"VNF_NAME"},{"property-value":"-","property-name":"DELIMITER"},{"property-name":"VF_MODULE_LABEL"},{"property-name":"VF_MODULE_TYPE"},{"property-name":"SEQUENCE","increment-sequence":{"max":"zzz","scope":"PRECEEDING","start-value":"1","length":"3","increment":"1","sequence-type":"alpha-numeric"}}]}]},"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0","metadata":{"policy-id":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","policy-version":"1.0.0"}}],"messageName":"PDP_UPDATE","requestId":"243683ae-56ab-4597-926a-fcce27e0e31d","timestampMs":1749826639850,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 14:59:57 policy-pap | [2025-06-13T14:57:19.923+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 14:59:57 policy-pap | 
{"source":"pap-8d981b63-9064-4d54-8468-b1eb1f91dc26","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[{"type":"onap.policies.Naming","type_version":"1.0.0","properties":{"policy-instance-name":"ONAP_NF_NAMING_TIMESTAMP","naming-models":[{"naming-type":"VNF","naming-recipe":"AIC_CLOUD_REGION|DELIMITER|CONSTANT|DELIMITER|TIMESTAMP","name-operation":"to_lower_case()","naming-properties":[{"property-name":"AIC_CLOUD_REGION"},{"property-name":"CONSTANT","property-value":"onap-nf"},{"property-name":"TIMESTAMP"},{"property-value":"-","property-name":"DELIMITER"}]},{"naming-type":"VNFC","naming-recipe":"VNF_NAME|DELIMITER|NFC_NAMING_CODE|DELIMITER|SEQUENCE","name-operation":"to_lower_case()","naming-properties":[{"property-name":"VNF_NAME"},{"property-name":"SEQUENCE","increment-sequence":{"max":"zzz","scope":"ENTIRETY","start-value":"1","length":"3","increment":"1","sequence-type":"alpha-numeric"}},{"property-name":"NFC_NAMING_CODE"},{"property-value":"-","property-name":"DELIMITER"}]},{"naming-type":"VF-MODULE","naming-recipe":"VNF_NAME|DELIMITER|VF_MODULE_LABEL|DELIMITER|VF_MODULE_TYPE|DELIMITER|SEQUENCE","name-operation":"to_lower_case()","naming-properties":[{"property-name":"VNF_NAME"},{"property-value":"-","property-name":"DELIMITER"},{"property-name":"VF_MODULE_LABEL"},{"property-name":"VF_MODULE_TYPE"},{"property-name":"SEQUENCE","increment-sequence":{"max":"zzz","scope":"PRECEEDING","start-value":"1","length":"3","increment":"1","sequence-type":"alpha-numeric"}}]}]},"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0","metadata":{"policy-id":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","policy-version":"1.0.0"}}],"messageName":"PDP_UPDATE","requestId":"243683ae-56ab-4597-926a-fcce27e0e31d","timestampMs":1749826639850,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 14:59:57 policy-pap | [2025-06-13T14:57:19.923+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 14:59:57 policy-pap | [2025-06-13T14:57:19.927+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 14:59:57 policy-pap | 
{"source":"pap-8d981b63-9064-4d54-8468-b1eb1f91dc26","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[{"type":"onap.policies.Naming","type_version":"1.0.0","properties":{"policy-instance-name":"ONAP_NF_NAMING_TIMESTAMP","naming-models":[{"naming-type":"VNF","naming-recipe":"AIC_CLOUD_REGION|DELIMITER|CONSTANT|DELIMITER|TIMESTAMP","name-operation":"to_lower_case()","naming-properties":[{"property-name":"AIC_CLOUD_REGION"},{"property-name":"CONSTANT","property-value":"onap-nf"},{"property-name":"TIMESTAMP"},{"property-value":"-","property-name":"DELIMITER"}]},{"naming-type":"VNFC","naming-recipe":"VNF_NAME|DELIMITER|NFC_NAMING_CODE|DELIMITER|SEQUENCE","name-operation":"to_lower_case()","naming-properties":[{"property-name":"VNF_NAME"},{"property-name":"SEQUENCE","increment-sequence":{"max":"zzz","scope":"ENTIRETY","start-value":"1","length":"3","increment":"1","sequence-type":"alpha-numeric"}},{"property-name":"NFC_NAMING_CODE"},{"property-value":"-","property-name":"DELIMITER"}]},{"naming-type":"VF-MODULE","naming-recipe":"VNF_NAME|DELIMITER|VF_MODULE_LABEL|DELIMITER|VF_MODULE_TYPE|DELIMITER|SEQUENCE","name-operation":"to_lower_case()","naming-properties":[{"property-name":"VNF_NAME"},{"property-value":"-","property-name":"DELIMITER"},{"property-name":"VF_MODULE_LABEL"},{"property-name":"VF_MODULE_TYPE"},{"property-name":"SEQUENCE","increment-sequence":{"max":"zzz","scope":"PRECEEDING","start-value":"1","length":"3","increment":"1","sequence-type":"alpha-numeric"}}]}]},"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0","metadata":{"policy-id":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","policy-version":"1.0.0"}}],"messageName":"PDP_UPDATE","requestId":"243683ae-56ab-4597-926a-fcce27e0e31d","timestampMs":1749826639850,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 14:59:57 policy-pap | [2025-06-13T14:57:19.929+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 14:59:57 policy-pap | [2025-06-13T14:57:20.044+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 14:59:57 policy-pap | {"pdpType":"xacml","state":"PASSIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"}],"response":{"responseTo":"243683ae-56ab-4597-926a-fcce27e0e31d","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"587a891a-49c5-4bf1-8169-985183639997","timestampMs":1749826640030,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 14:59:57 policy-pap | [2025-06-13T14:57:20.044+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 14:59:57 policy-pap | {"pdpType":"xacml","state":"PASSIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"}],"response":{"responseTo":"243683ae-56ab-4597-926a-fcce27e0e31d","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"587a891a-49c5-4bf1-8169-985183639997","timestampMs":1749826640030,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 14:59:57 policy-pap | [2025-06-13T14:57:20.045+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 243683ae-56ab-4597-926a-fcce27e0e31d 14:59:57 policy-pap | [2025-06-13T14:57:20.045+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpUpdate stopping 14:59:57 policy-pap | 
[2025-06-13T14:57:20.046+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpUpdate stopping enqueue 14:59:57 policy-pap | [2025-06-13T14:57:20.046+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpUpdate stopping timer 14:59:57 policy-pap | [2025-06-13T14:57:20.046+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=243683ae-56ab-4597-926a-fcce27e0e31d, expireMs=1749826669870] 14:59:57 policy-pap | [2025-06-13T14:57:20.046+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpUpdate stopping listener 14:59:57 policy-pap | [2025-06-13T14:57:20.046+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpUpdate stopped 14:59:57 policy-pap | [2025-06-13T14:57:20.059+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpUpdate successful 14:59:57 policy-pap | [2025-06-13T14:57:20.059+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e start publishing next request 14:59:57 policy-pap | [2025-06-13T14:57:20.059+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpStateChange starting 14:59:57 policy-pap | [2025-06-13T14:57:20.059+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpStateChange starting listener 14:59:57 policy-pap | [2025-06-13T14:57:20.059+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpStateChange starting timer 14:59:57 policy-pap | [2025-06-13T14:57:20.059+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 14:59:57 policy-pap | {"pdpType":"xacml","state":"PASSIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"}],"messageName":"PDP_STATUS","requestId":"55ff5c69-c399-49d4-a95f-d4c543d908a0","timestampMs":1749826640037,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 14:59:57 policy-pap | [2025-06-13T14:57:20.059+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer registered Timer [name=d49dcaf6-23f5-41e2-86f9-c004bd57c4bb, expireMs=1749826670059] 14:59:57 policy-pap | [2025-06-13T14:57:20.059+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpStateChange starting enqueue 14:59:57 policy-pap | [2025-06-13T14:57:20.060+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpStateChange started 14:59:57 policy-pap | [2025-06-13T14:57:20.060+00:00|INFO|TimerManager|Thread-10] state-change timer waiting 29999ms Timer [name=d49dcaf6-23f5-41e2-86f9-c004bd57c4bb, expireMs=1749826670059] 14:59:57 policy-pap | [2025-06-13T14:57:20.060+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification] 14:59:57 policy-pap | {"deployed-policies":[{"policy-type":"onap.policies.Naming","policy-type-version":"1.0.0","policy-id":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","policy-version":"1.0.0","success-count":1,"failure-count":0,"incomplete-count":0}],"undeployed-policies":[]} 14:59:57 policy-pap | [2025-06-13T14:57:20.060+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 14:59:57 policy-pap | 
{"source":"pap-8d981b63-9064-4d54-8468-b1eb1f91dc26","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"d49dcaf6-23f5-41e2-86f9-c004bd57c4bb","timestampMs":1749826639851,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 14:59:57 policy-pap | [2025-06-13T14:57:20.082+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] The metadata response from the cluster reported a recoverable issue with correlation id 7 : {policy-notification=LEADER_NOT_AVAILABLE} 14:59:57 policy-pap | [2025-06-13T14:57:20.384+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 14:59:57 policy-pap | {"pdpType":"xacml","state":"PASSIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"}],"messageName":"PDP_STATUS","requestId":"55ff5c69-c399-49d4-a95f-d4c543d908a0","timestampMs":1749826640037,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 14:59:57 policy-pap | [2025-06-13T14:57:20.385+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus 14:59:57 policy-pap | [2025-06-13T14:57:20.388+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 14:59:57 policy-pap | {"source":"pap-8d981b63-9064-4d54-8468-b1eb1f91dc26","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"d49dcaf6-23f5-41e2-86f9-c004bd57c4bb","timestampMs":1749826639851,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 14:59:57 policy-pap | [2025-06-13T14:57:20.389+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATE_CHANGE 14:59:57 policy-pap | [2025-06-13T14:57:20.391+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 14:59:57 policy-pap | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","response":{"responseTo":"d49dcaf6-23f5-41e2-86f9-c004bd57c4bb","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"72e98077-ce68-4257-9fdb-7e7ad741339a","timestampMs":1749826640073,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 14:59:57 policy-pap | [2025-06-13T14:57:20.624+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpStateChange stopping 14:59:57 policy-pap | [2025-06-13T14:57:20.624+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpStateChange stopping enqueue 14:59:57 policy-pap | [2025-06-13T14:57:20.625+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpStateChange stopping timer 14:59:57 policy-pap | [2025-06-13T14:57:20.625+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer cancelled Timer [name=d49dcaf6-23f5-41e2-86f9-c004bd57c4bb, expireMs=1749826670059] 14:59:57 policy-pap | [2025-06-13T14:57:20.625+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpStateChange stopping listener 14:59:57 policy-pap | [2025-06-13T14:57:20.625+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpStateChange stopped 14:59:57 policy-pap | [2025-06-13T14:57:20.625+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpStateChange successful 14:59:57 policy-pap | 
[2025-06-13T14:57:20.625+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e start publishing next request 14:59:57 policy-pap | [2025-06-13T14:57:20.625+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpUpdate starting 14:59:57 policy-pap | [2025-06-13T14:57:20.625+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpUpdate starting listener 14:59:57 policy-pap | [2025-06-13T14:57:20.625+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpUpdate starting timer 14:59:57 policy-pap | [2025-06-13T14:57:20.625+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer registered Timer [name=63c49f14-f1a0-4743-8e01-8dc98e4cfb41, expireMs=1749826670625] 14:59:57 policy-pap | [2025-06-13T14:57:20.625+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpUpdate starting enqueue 14:59:57 policy-pap | [2025-06-13T14:57:20.625+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpUpdate started 14:59:57 policy-pap | [2025-06-13T14:57:20.625+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 14:59:57 policy-pap | {"source":"pap-8d981b63-9064-4d54-8468-b1eb1f91dc26","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"63c49f14-f1a0-4743-8e01-8dc98e4cfb41","timestampMs":1749826640376,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 14:59:57 policy-pap | [2025-06-13T14:57:20.630+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 14:59:57 policy-pap | {"source":"pap-8d981b63-9064-4d54-8468-b1eb1f91dc26","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"d49dcaf6-23f5-41e2-86f9-c004bd57c4bb","timestampMs":1749826639851,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 14:59:57 policy-pap | [2025-06-13T14:57:20.630+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_STATE_CHANGE 14:59:57 policy-pap | [2025-06-13T14:57:20.635+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 14:59:57 policy-pap | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","response":{"responseTo":"d49dcaf6-23f5-41e2-86f9-c004bd57c4bb","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"72e98077-ce68-4257-9fdb-7e7ad741339a","timestampMs":1749826640073,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 14:59:57 policy-pap | [2025-06-13T14:57:20.635+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id d49dcaf6-23f5-41e2-86f9-c004bd57c4bb 14:59:57 policy-pap | [2025-06-13T14:57:20.639+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 14:59:57 policy-pap | {"source":"pap-8d981b63-9064-4d54-8468-b1eb1f91dc26","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"63c49f14-f1a0-4743-8e01-8dc98e4cfb41","timestampMs":1749826640376,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 14:59:57 policy-pap | [2025-06-13T14:57:20.639+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type 
PDP_UPDATE 14:59:57 policy-pap | [2025-06-13T14:57:20.638+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 14:59:57 policy-pap | {"source":"pap-8d981b63-9064-4d54-8468-b1eb1f91dc26","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"63c49f14-f1a0-4743-8e01-8dc98e4cfb41","timestampMs":1749826640376,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 14:59:57 policy-pap | [2025-06-13T14:57:20.640+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 14:59:57 policy-pap | [2025-06-13T14:57:20.651+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 14:59:57 policy-pap | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"}],"response":{"responseTo":"63c49f14-f1a0-4743-8e01-8dc98e4cfb41","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"93d49d05-9ee7-4d6b-9028-491a1ccee074","timestampMs":1749826640639,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 14:59:57 policy-pap | [2025-06-13T14:57:20.652+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 63c49f14-f1a0-4743-8e01-8dc98e4cfb41 14:59:57 policy-pap | [2025-06-13T14:57:20.658+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 14:59:57 policy-pap | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"}],"response":{"responseTo":"63c49f14-f1a0-4743-8e01-8dc98e4cfb41","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"93d49d05-9ee7-4d6b-9028-491a1ccee074","timestampMs":1749826640639,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 14:59:57 policy-pap | [2025-06-13T14:57:20.659+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpUpdate stopping 14:59:57 policy-pap | [2025-06-13T14:57:20.659+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpUpdate stopping enqueue 14:59:57 policy-pap | [2025-06-13T14:57:20.659+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpUpdate stopping timer 14:59:57 policy-pap | [2025-06-13T14:57:20.659+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=63c49f14-f1a0-4743-8e01-8dc98e4cfb41, expireMs=1749826670625] 14:59:57 policy-pap | [2025-06-13T14:57:20.659+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpUpdate stopping listener 14:59:57 policy-pap | [2025-06-13T14:57:20.659+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpUpdate stopped 14:59:57 policy-pap | [2025-06-13T14:57:20.664+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpUpdate successful 14:59:57 policy-pap | [2025-06-13T14:57:20.664+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e has no more requests 14:59:57 policy-pap | [2025-06-13T14:57:41.622+00:00|INFO|[/policy/pap/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet' 14:59:57 policy-pap | 
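The http-nio-6969 activity that follows (14:58:29) is PAP servicing a REST deployment request, which then fans out as the PDP_UPDATE shown afterwards. A sketch of the originating call with java.net.http; the endpoint and payload follow PAP's documented simple-deploy API and the credentials are placeholders, so check both against your deployment:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

public final class DeployPolicySketch {
    public static void main(String[] args) throws Exception {
        // Deploy two already-created policies to their default PDP group.
        String body = "{\"policies\":["
                + "{\"policy-id\":\"onap.restart.tca\",\"policy-version\":\"1.0.0\"},"
                + "{\"policy-id\":\"OSDF_CASABLANCA.Affinity_Default\",\"policy-version\":\"1.0.0\"}]}";
        String auth = Base64.getEncoder()
                .encodeToString("policyadmin:CHANGE_ME".getBytes()); // placeholder credentials
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:6969/policy/pap/v1/pdps/policies"))
                .header("Content-Type", "application/json")
                .header("Authorization", "Basic " + auth)
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}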
[2025-06-13T14:57:41.622+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet' 14:59:57 policy-pap | [2025-06-13T14:57:41.625+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 3 ms 14:59:57 policy-pap | [2025-06-13T14:57:49.870+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=243683ae-56ab-4597-926a-fcce27e0e31d, expireMs=1749826669870] 14:59:57 policy-pap | [2025-06-13T14:57:50.059+00:00|INFO|TimerManager|Thread-10] state-change timer discarded (expired) Timer [name=d49dcaf6-23f5-41e2-86f9-c004bd57c4bb, expireMs=1749826670059] 14:59:57 policy-pap | [2025-06-13T14:58:29.575+00:00|INFO|SessionData|http-nio-6969-exec-3] cache group defaultGroup 14:59:57 policy-pap | [2025-06-13T14:58:29.576+00:00|INFO|PdpGroupDeployProvider|http-nio-6969-exec-3] add policy onap.restart.tca 1.0.0 to subgroup defaultGroup xacml count=2 14:59:57 policy-pap | [2025-06-13T14:58:29.577+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-3] Registering a deploy for policy onap.restart.tca 1.0.0 14:59:57 policy-pap | [2025-06-13T14:58:29.578+00:00|INFO|SessionData|http-nio-6969-exec-3] add update xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e defaultGroup xacml policies=1 14:59:57 policy-pap | [2025-06-13T14:58:29.578+00:00|INFO|SessionData|http-nio-6969-exec-3] update cached group defaultGroup 14:59:57 policy-pap | [2025-06-13T14:58:29.625+00:00|INFO|SessionData|http-nio-6969-exec-3] use cached group defaultGroup 14:59:57 policy-pap | [2025-06-13T14:58:29.625+00:00|INFO|PdpGroupDeployProvider|http-nio-6969-exec-3] add policy OSDF_CASABLANCA.Affinity_Default 1.0.0 to subgroup defaultGroup xacml count=3 14:59:57 policy-pap | [2025-06-13T14:58:29.625+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-3] Registering a deploy for policy OSDF_CASABLANCA.Affinity_Default 1.0.0 14:59:57 policy-pap | [2025-06-13T14:58:29.625+00:00|INFO|SessionData|http-nio-6969-exec-3] add update xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e defaultGroup xacml policies=2 14:59:57 policy-pap | [2025-06-13T14:58:29.625+00:00|INFO|SessionData|http-nio-6969-exec-3] update cached group defaultGroup 14:59:57 policy-pap | [2025-06-13T14:58:29.626+00:00|INFO|SessionData|http-nio-6969-exec-3] updating DB group defaultGroup 14:59:57 policy-pap | [2025-06-13T14:58:29.644+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-3] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=defaultGroup, pdpType=xacml, policy=onap.restart.tca 1.0.0, action=DEPLOYMENT, timestamp=2025-06-13T14:58:29Z, user=policyadmin), PolicyAudit(auditId=null, pdpGroup=defaultGroup, pdpType=xacml, policy=OSDF_CASABLANCA.Affinity_Default 1.0.0, action=DEPLOYMENT, timestamp=2025-06-13T14:58:29Z, user=policyadmin)] 14:59:57 policy-pap | [2025-06-13T14:58:29.674+00:00|INFO|ServiceManager|http-nio-6969-exec-3] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpUpdate starting 14:59:57 policy-pap | [2025-06-13T14:58:29.674+00:00|INFO|ServiceManager|http-nio-6969-exec-3] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpUpdate starting listener 14:59:57 policy-pap | [2025-06-13T14:58:29.674+00:00|INFO|ServiceManager|http-nio-6969-exec-3] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpUpdate starting timer 14:59:57 policy-pap | [2025-06-13T14:58:29.674+00:00|INFO|TimerManager|http-nio-6969-exec-3] update timer registered Timer [name=6a5c2c9f-6c22-44fe-904b-515d314bb708, expireMs=1749826739674] 14:59:57 policy-pap | [2025-06-13T14:58:29.674+00:00|INFO|ServiceManager|http-nio-6969-exec-3] 
xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpUpdate starting enqueue 14:59:57 policy-pap | [2025-06-13T14:58:29.674+00:00|INFO|ServiceManager|http-nio-6969-exec-3] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpUpdate started 14:59:57 policy-pap | [2025-06-13T14:58:29.675+00:00|INFO|TimerManager|Thread-9] update timer waiting 30000ms Timer [name=6a5c2c9f-6c22-44fe-904b-515d314bb708, expireMs=1749826739674] 14:59:57 policy-pap | [2025-06-13T14:58:29.675+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 14:59:57 policy-pap | {"source":"pap-8d981b63-9064-4d54-8468-b1eb1f91dc26","description":"The default group that registers all supported policy types and pdps.","policiesToBeDeployed":[{"type":"onap.policies.monitoring.tcagen2","type_version":"1.0.0","properties":{"tca.policy":{"domain":"measurementsForVfScaling","metricsPerEventName":[{"eventName":"Measurement_vGMUX","controlLoopSchemaType":"VNF","policyScope":"DCAE","policyName":"DCAE.Config_tca-hi-lo","policyVersion":"v0.0.1","thresholds":[{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"EQUAL","severity":"MAJOR","closedLoopEventStatus":"ABATED"},{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"GREATER","severity":"CRITICAL","closedLoopEventStatus":"ONSET"}]}]}},"name":"onap.restart.tca","version":"1.0.0","metadata":{"policy-id":"onap.restart.tca","policy-version":"1.0.0"}},{"type":"onap.policies.optimization.resource.AffinityPolicy","type_version":"1.0.0","properties":{"geography":[],"identity":"affinity_vCPE","scope":[],"affinityProperties":{"qualifier":"same","category":"complex"},"resources":[],"services":[],"applicableResources":"any"},"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0","metadata":{"policy-id":"OSDF_CASABLANCA.Affinity_Default","policy-version":"1.0.0"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"6a5c2c9f-6c22-44fe-904b-515d314bb708","timestampMs":1749826709625,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 14:59:57 policy-pap | [2025-06-13T14:58:29.685+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 14:59:57 policy-pap | {"source":"pap-8d981b63-9064-4d54-8468-b1eb1f91dc26","description":"The default group that registers all supported policy types and 
pdps.","policiesToBeDeployed":[{"type":"onap.policies.monitoring.tcagen2","type_version":"1.0.0","properties":{"tca.policy":{"domain":"measurementsForVfScaling","metricsPerEventName":[{"eventName":"Measurement_vGMUX","controlLoopSchemaType":"VNF","policyScope":"DCAE","policyName":"DCAE.Config_tca-hi-lo","policyVersion":"v0.0.1","thresholds":[{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"EQUAL","severity":"MAJOR","closedLoopEventStatus":"ABATED"},{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"GREATER","severity":"CRITICAL","closedLoopEventStatus":"ONSET"}]}]}},"name":"onap.restart.tca","version":"1.0.0","metadata":{"policy-id":"onap.restart.tca","policy-version":"1.0.0"}},{"type":"onap.policies.optimization.resource.AffinityPolicy","type_version":"1.0.0","properties":{"geography":[],"identity":"affinity_vCPE","scope":[],"affinityProperties":{"qualifier":"same","category":"complex"},"resources":[],"services":[],"applicableResources":"any"},"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0","metadata":{"policy-id":"OSDF_CASABLANCA.Affinity_Default","policy-version":"1.0.0"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"6a5c2c9f-6c22-44fe-904b-515d314bb708","timestampMs":1749826709625,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 14:59:57 policy-pap | [2025-06-13T14:58:29.685+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 14:59:57 policy-pap | [2025-06-13T14:58:29.686+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 14:59:57 policy-pap | {"source":"pap-8d981b63-9064-4d54-8468-b1eb1f91dc26","description":"The default group that registers all supported policy types and 
pdps.","policiesToBeDeployed":[{"type":"onap.policies.monitoring.tcagen2","type_version":"1.0.0","properties":{"tca.policy":{"domain":"measurementsForVfScaling","metricsPerEventName":[{"eventName":"Measurement_vGMUX","controlLoopSchemaType":"VNF","policyScope":"DCAE","policyName":"DCAE.Config_tca-hi-lo","policyVersion":"v0.0.1","thresholds":[{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"EQUAL","severity":"MAJOR","closedLoopEventStatus":"ABATED"},{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"GREATER","severity":"CRITICAL","closedLoopEventStatus":"ONSET"}]}]}},"name":"onap.restart.tca","version":"1.0.0","metadata":{"policy-id":"onap.restart.tca","policy-version":"1.0.0"}},{"type":"onap.policies.optimization.resource.AffinityPolicy","type_version":"1.0.0","properties":{"geography":[],"identity":"affinity_vCPE","scope":[],"affinityProperties":{"qualifier":"same","category":"complex"},"resources":[],"services":[],"applicableResources":"any"},"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0","metadata":{"policy-id":"OSDF_CASABLANCA.Affinity_Default","policy-version":"1.0.0"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"6a5c2c9f-6c22-44fe-904b-515d314bb708","timestampMs":1749826709625,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 14:59:57 policy-pap | [2025-06-13T14:58:29.686+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 14:59:57 policy-pap | [2025-06-13T14:58:30.211+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 14:59:57 policy-pap | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"},{"name":"onap.restart.tca","version":"1.0.0"},{"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0"}],"response":{"responseTo":"6a5c2c9f-6c22-44fe-904b-515d314bb708","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"800a49c0-b071-44e7-8819-4105949c61d2","timestampMs":1749826710206,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 14:59:57 policy-pap | [2025-06-13T14:58:30.212+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 6a5c2c9f-6c22-44fe-904b-515d314bb708 14:59:57 policy-pap | [2025-06-13T14:58:30.219+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 14:59:57 policy-pap | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"},{"name":"onap.restart.tca","version":"1.0.0"},{"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0"}],"response":{"responseTo":"6a5c2c9f-6c22-44fe-904b-515d314bb708","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"800a49c0-b071-44e7-8819-4105949c61d2","timestampMs":1749826710206,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 14:59:57 policy-pap | [2025-06-13T14:58:30.220+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpUpdate 
stopping 14:59:57 policy-pap | [2025-06-13T14:58:30.220+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpUpdate stopping enqueue 14:59:57 policy-pap | [2025-06-13T14:58:30.220+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpUpdate stopping timer 14:59:57 policy-pap | [2025-06-13T14:58:30.220+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=6a5c2c9f-6c22-44fe-904b-515d314bb708, expireMs=1749826739674] 14:59:57 policy-pap | [2025-06-13T14:58:30.220+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpUpdate stopping listener 14:59:57 policy-pap | [2025-06-13T14:58:30.220+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpUpdate stopped 14:59:57 policy-pap | [2025-06-13T14:58:30.228+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpUpdate successful 14:59:57 policy-pap | [2025-06-13T14:58:30.229+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e has no more requests 14:59:57 policy-pap | [2025-06-13T14:58:30.229+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification] 14:59:57 policy-pap | {"deployed-policies":[{"policy-type":"onap.policies.monitoring.tcagen2","policy-type-version":"1.0.0","policy-id":"onap.restart.tca","policy-version":"1.0.0","success-count":1,"failure-count":0,"incomplete-count":0},{"policy-type":"onap.policies.optimization.resource.AffinityPolicy","policy-type-version":"1.0.0","policy-id":"OSDF_CASABLANCA.Affinity_Default","policy-version":"1.0.0","success-count":1,"failure-count":0,"incomplete-count":0}],"undeployed-policies":[]} 14:59:57 policy-pap | [2025-06-13T14:58:54.336+00:00|INFO|SessionData|http-nio-6969-exec-5] cache group defaultGroup 14:59:57 policy-pap | [2025-06-13T14:58:54.338+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-5] remove policy onap.restart.tca 1.0.0 from subgroup defaultGroup xacml count=2 14:59:57 policy-pap | [2025-06-13T14:58:54.338+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] Registering an undeploy for policy onap.restart.tca 1.0.0 14:59:57 policy-pap | [2025-06-13T14:58:54.338+00:00|INFO|SessionData|http-nio-6969-exec-5] add update xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e defaultGroup xacml policies=0 14:59:57 policy-pap | [2025-06-13T14:58:54.338+00:00|INFO|SessionData|http-nio-6969-exec-5] update cached group defaultGroup 14:59:57 policy-pap | [2025-06-13T14:58:54.338+00:00|INFO|SessionData|http-nio-6969-exec-5] updating DB group defaultGroup 14:59:57 policy-pap | [2025-06-13T14:58:54.352+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=defaultGroup, pdpType=xacml, policy=onap.restart.tca 1.0.0, action=UNDEPLOYMENT, timestamp=2025-06-13T14:58:54Z, user=policyadmin)] 14:59:57 policy-pap | [2025-06-13T14:58:54.365+00:00|INFO|ServiceManager|http-nio-6969-exec-5] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpUpdate starting 14:59:57 policy-pap | [2025-06-13T14:58:54.365+00:00|INFO|ServiceManager|http-nio-6969-exec-5] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpUpdate starting listener 14:59:57 policy-pap | [2025-06-13T14:58:54.365+00:00|INFO|ServiceManager|http-nio-6969-exec-5] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpUpdate starting timer 14:59:57 policy-pap | 
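The 14:58:54 sequence above and below undeploys onap.restart.tca again; it is driven by the corresponding DELETE on PAP's deployment API. Same caveats as the deploy sketch (endpoint form and credentials are assumptions to verify):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

public final class UndeployPolicySketch {
    public static void main(String[] args) throws Exception {
        String auth = Base64.getEncoder()
                .encodeToString("policyadmin:CHANGE_ME".getBytes()); // placeholder credentials
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:6969/policy/pap/v1/pdps/policies/onap.restart.tca"))
                .header("Authorization", "Basic " + auth)
                .DELETE()
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}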
[2025-06-13T14:58:54.365+00:00|INFO|TimerManager|http-nio-6969-exec-5] update timer registered Timer [name=cb526d69-01bf-4ec2-b43b-e5796b06e4c5, expireMs=1749826764365] 14:59:57 policy-pap | [2025-06-13T14:58:54.365+00:00|INFO|ServiceManager|http-nio-6969-exec-5] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpUpdate starting enqueue 14:59:57 policy-pap | [2025-06-13T14:58:54.365+00:00|INFO|ServiceManager|http-nio-6969-exec-5] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpUpdate started 14:59:57 policy-pap | [2025-06-13T14:58:54.365+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 14:59:57 policy-pap | {"source":"pap-8d981b63-9064-4d54-8468-b1eb1f91dc26","description":"The default group that registers all supported policy types and pdps.","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"onap.restart.tca","version":"1.0.0"}],"messageName":"PDP_UPDATE","requestId":"cb526d69-01bf-4ec2-b43b-e5796b06e4c5","timestampMs":1749826734338,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 14:59:57 policy-pap | [2025-06-13T14:58:54.374+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 14:59:57 policy-pap | {"source":"pap-8d981b63-9064-4d54-8468-b1eb1f91dc26","description":"The default group that registers all supported policy types and pdps.","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"onap.restart.tca","version":"1.0.0"}],"messageName":"PDP_UPDATE","requestId":"cb526d69-01bf-4ec2-b43b-e5796b06e4c5","timestampMs":1749826734338,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 14:59:57 policy-pap | [2025-06-13T14:58:54.374+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 14:59:57 policy-pap | [2025-06-13T14:58:54.374+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 14:59:57 policy-pap | {"source":"pap-8d981b63-9064-4d54-8468-b1eb1f91dc26","description":"The default group that registers all supported policy types and pdps.","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"onap.restart.tca","version":"1.0.0"}],"messageName":"PDP_UPDATE","requestId":"cb526d69-01bf-4ec2-b43b-e5796b06e4c5","timestampMs":1749826734338,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 14:59:57 policy-pap | [2025-06-13T14:58:54.374+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 14:59:57 policy-pap | [2025-06-13T14:58:54.382+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 14:59:57 policy-pap | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"},{"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0"}],"response":{"responseTo":"cb526d69-01bf-4ec2-b43b-e5796b06e4c5","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"fe9b2d1d-c5b0-4dd8-9c19-c42c7ad985ee","timestampMs":1749826734376,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 14:59:57 policy-pap | [2025-06-13T14:58:54.382+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpUpdate stopping 14:59:57 policy-pap | [2025-06-13T14:58:54.382+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpUpdate stopping enqueue 14:59:57 policy-pap | 
[2025-06-13T14:58:54.382+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpUpdate stopping timer 14:59:57 policy-pap | [2025-06-13T14:58:54.382+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=cb526d69-01bf-4ec2-b43b-e5796b06e4c5, expireMs=1749826764365] 14:59:57 policy-pap | [2025-06-13T14:58:54.383+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpUpdate stopping listener 14:59:57 policy-pap | [2025-06-13T14:58:54.383+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpUpdate stopped 14:59:57 policy-pap | [2025-06-13T14:58:54.384+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 14:59:57 policy-pap | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"},{"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0"}],"response":{"responseTo":"cb526d69-01bf-4ec2-b43b-e5796b06e4c5","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"fe9b2d1d-c5b0-4dd8-9c19-c42c7ad985ee","timestampMs":1749826734376,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 14:59:57 policy-pap | [2025-06-13T14:58:54.385+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id cb526d69-01bf-4ec2-b43b-e5796b06e4c5 14:59:57 policy-pap | [2025-06-13T14:58:54.400+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e PdpUpdate successful 14:59:57 policy-pap | [2025-06-13T14:58:54.400+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e has no more requests 14:59:57 policy-pap | [2025-06-13T14:58:54.401+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification] 14:59:57 policy-pap | {"deployed-policies":[],"undeployed-policies":[{"policy-type":"onap.policies.monitoring.tcagen2","policy-type-version":"1.0.0","policy-id":"onap.restart.tca","policy-version":"1.0.0","success-count":1,"failure-count":0,"incomplete-count":0}]} 14:59:57 policy-pap | [2025-06-13T14:58:59.675+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=6a5c2c9f-6c22-44fe-904b-515d314bb708, expireMs=1749826739674] 14:59:57 policy-pap | [2025-06-13T14:59:12.852+00:00|INFO|PdpModifyRequestMap|pool-3-thread-1] check for PDP records older than 360000ms 14:59:57 policy-pap | [2025-06-13T14:59:20.060+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 14:59:57 policy-pap | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"},{"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0"}],"messageName":"PDP_STATUS","requestId":"eefa5f0e-984c-486a-a008-71aa56b4235b","timestampMs":1749826760051,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 14:59:57 policy-pap | [2025-06-13T14:59:20.061+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 14:59:57 policy-pap | 
{"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"},{"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0"}],"messageName":"PDP_STATUS","requestId":"eefa5f0e-984c-486a-a008-71aa56b4235b","timestampMs":1749826760051,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 14:59:57 policy-pap | [2025-06-13T14:59:20.062+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus 14:59:57 policy-xacml-pdp | Waiting for pap port 6969... 14:59:57 policy-xacml-pdp | pap (172.17.0.9:6969) open 14:59:57 policy-xacml-pdp | Waiting for kafka port 9092... 14:59:57 policy-xacml-pdp | kafka (172.17.0.5:9092) open 14:59:57 policy-xacml-pdp | + KEYSTORE=/opt/app/policy/pdpx/etc/ssl/policy-keystore 14:59:57 policy-xacml-pdp | + TRUSTSTORE=/opt/app/policy/pdpx/etc/ssl/policy-truststore 14:59:57 policy-xacml-pdp | + KEYSTORE_PASSWD=Pol1cy_0nap 14:59:57 policy-xacml-pdp | + TRUSTSTORE_PASSWD=Pol1cy_0nap 14:59:57 policy-xacml-pdp | + '[' 0 -ge 1 ] 14:59:57 policy-xacml-pdp | + CONFIG_FILE= 14:59:57 policy-xacml-pdp | + '[' -z ] 14:59:57 policy-xacml-pdp | + CONFIG_FILE=/opt/app/policy/pdpx/etc/defaultConfig.json 14:59:57 policy-xacml-pdp | + '[' -f /opt/app/policy/pdpx/etc/mounted/policy-truststore ] 14:59:57 policy-xacml-pdp | + '[' -f /opt/app/policy/pdpx/etc/mounted/policy-keystore ] 14:59:57 policy-xacml-pdp | + '[' -f /opt/app/policy/pdpx/etc/mounted/xacml.properties ] 14:59:57 policy-xacml-pdp | + '[' -f /opt/app/policy/pdpx/etc/mounted/logback.xml ] 14:59:57 policy-xacml-pdp | + echo 'Policy Xacml PDP config file: /opt/app/policy/pdpx/etc/defaultConfig.json' 14:59:57 policy-xacml-pdp | Policy Xacml PDP config file: /opt/app/policy/pdpx/etc/defaultConfig.json 14:59:57 policy-xacml-pdp | + /usr/lib/jvm/default-jvm/bin/java -cp '/opt/app/policy/pdpx/etc:/opt/app/policy/pdpx/lib/*' '-Dlogback.configurationFile=/opt/app/policy/pdpx/etc/logback.xml' '-Djavax.net.ssl.keyStore=/opt/app/policy/pdpx/etc/ssl/policy-keystore' '-Djavax.net.ssl.keyStorePassword=Pol1cy_0nap' '-Djavax.net.ssl.trustStore=/opt/app/policy/pdpx/etc/ssl/policy-truststore' '-Djavax.net.ssl.trustStorePassword=Pol1cy_0nap' org.onap.policy.pdpx.main.startstop.Main -c /opt/app/policy/pdpx/etc/defaultConfig.json 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:13.976+00:00|INFO|Main|main] Starting policy xacml pdp service with arguments - [-c, /opt/app/policy/pdpx/etc/defaultConfig.json] 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.108+00:00|INFO|XacmlPdpActivator|main] Activator initializing using org.onap.policy.pdpx.main.parameters.XacmlPdpParameterGroup@37858383 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.164+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 14:59:57 policy-xacml-pdp | allow.auto.create.topics = true 14:59:57 policy-xacml-pdp | auto.commit.interval.ms = 5000 14:59:57 policy-xacml-pdp | auto.include.jmx.reporter = true 14:59:57 policy-xacml-pdp | auto.offset.reset = latest 14:59:57 policy-xacml-pdp | bootstrap.servers = [kafka:9092] 14:59:57 policy-xacml-pdp | check.crcs = true 14:59:57 policy-xacml-pdp | client.dns.lookup = use_all_dns_ips 14:59:57 policy-xacml-pdp | client.id = consumer-bcceede6-cf80-4e3b-b200-9e273dce58d5-1 14:59:57 policy-xacml-pdp | client.rack = 14:59:57 policy-xacml-pdp | connections.max.idle.ms = 540000 14:59:57 policy-xacml-pdp | default.api.timeout.ms = 60000 14:59:57 policy-xacml-pdp | enable.auto.commit = 
true 14:59:57 policy-xacml-pdp | enable.metrics.push = true 14:59:57 policy-xacml-pdp | exclude.internal.topics = true 14:59:57 policy-xacml-pdp | fetch.max.bytes = 52428800 14:59:57 policy-xacml-pdp | fetch.max.wait.ms = 500 14:59:57 policy-xacml-pdp | fetch.min.bytes = 1 14:59:57 policy-xacml-pdp | group.id = bcceede6-cf80-4e3b-b200-9e273dce58d5 14:59:57 policy-xacml-pdp | group.instance.id = null 14:59:57 policy-xacml-pdp | group.protocol = classic 14:59:57 policy-xacml-pdp | group.remote.assignor = null 14:59:57 policy-xacml-pdp | heartbeat.interval.ms = 3000 14:59:57 policy-xacml-pdp | interceptor.classes = [] 14:59:57 policy-xacml-pdp | internal.leave.group.on.close = true 14:59:57 policy-xacml-pdp | internal.throw.on.fetch.stable.offset.unsupported = false 14:59:57 policy-xacml-pdp | isolation.level = read_uncommitted 14:59:57 policy-xacml-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 14:59:57 policy-xacml-pdp | max.partition.fetch.bytes = 1048576 14:59:57 policy-xacml-pdp | max.poll.interval.ms = 300000 14:59:57 policy-xacml-pdp | max.poll.records = 500 14:59:57 policy-xacml-pdp | metadata.max.age.ms = 300000 14:59:57 policy-xacml-pdp | metadata.recovery.strategy = none 14:59:57 policy-xacml-pdp | metric.reporters = [] 14:59:57 policy-xacml-pdp | metrics.num.samples = 2 14:59:57 policy-xacml-pdp | metrics.recording.level = INFO 14:59:57 policy-xacml-pdp | metrics.sample.window.ms = 30000 14:59:57 policy-xacml-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 14:59:57 policy-xacml-pdp | receive.buffer.bytes = 65536 14:59:57 policy-xacml-pdp | reconnect.backoff.max.ms = 1000 14:59:57 policy-xacml-pdp | reconnect.backoff.ms = 50 14:59:57 policy-xacml-pdp | request.timeout.ms = 30000 14:59:57 policy-xacml-pdp | retry.backoff.max.ms = 1000 14:59:57 policy-xacml-pdp | retry.backoff.ms = 100 14:59:57 policy-xacml-pdp | sasl.client.callback.handler.class = null 14:59:57 policy-xacml-pdp | sasl.jaas.config = null 14:59:57 policy-xacml-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit 14:59:57 policy-xacml-pdp | sasl.kerberos.min.time.before.relogin = 60000 14:59:57 policy-xacml-pdp | sasl.kerberos.service.name = null 14:59:57 policy-xacml-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 14:59:57 policy-xacml-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 14:59:57 policy-xacml-pdp | sasl.login.callback.handler.class = null 14:59:57 policy-xacml-pdp | sasl.login.class = null 14:59:57 policy-xacml-pdp | sasl.login.connect.timeout.ms = null 14:59:57 policy-xacml-pdp | sasl.login.read.timeout.ms = null 14:59:57 policy-xacml-pdp | sasl.login.refresh.buffer.seconds = 300 14:59:57 policy-xacml-pdp | sasl.login.refresh.min.period.seconds = 60 14:59:57 policy-xacml-pdp | sasl.login.refresh.window.factor = 0.8 14:59:57 policy-xacml-pdp | sasl.login.refresh.window.jitter = 0.05 14:59:57 policy-xacml-pdp | sasl.login.retry.backoff.max.ms = 10000 14:59:57 policy-xacml-pdp | sasl.login.retry.backoff.ms = 100 14:59:57 policy-xacml-pdp | sasl.mechanism = GSSAPI 14:59:57 policy-xacml-pdp | sasl.oauthbearer.clock.skew.seconds = 30 14:59:57 policy-xacml-pdp | sasl.oauthbearer.expected.audience = null 14:59:57 policy-xacml-pdp | sasl.oauthbearer.expected.issuer = null 14:59:57 policy-xacml-pdp | sasl.oauthbearer.header.urlencode = false 14:59:57 policy-xacml-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 14:59:57 policy-xacml-pdp | 
sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 14:59:57 policy-xacml-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 14:59:57 policy-xacml-pdp | sasl.oauthbearer.jwks.endpoint.url = null 14:59:57 policy-xacml-pdp | sasl.oauthbearer.scope.claim.name = scope 14:59:57 policy-xacml-pdp | sasl.oauthbearer.sub.claim.name = sub 14:59:57 policy-xacml-pdp | sasl.oauthbearer.token.endpoint.url = null 14:59:57 policy-xacml-pdp | security.protocol = PLAINTEXT 14:59:57 policy-xacml-pdp | security.providers = null 14:59:57 policy-xacml-pdp | send.buffer.bytes = 131072 14:59:57 policy-xacml-pdp | session.timeout.ms = 45000 14:59:57 policy-xacml-pdp | socket.connection.setup.timeout.max.ms = 30000 14:59:57 policy-xacml-pdp | socket.connection.setup.timeout.ms = 10000 14:59:57 policy-xacml-pdp | ssl.cipher.suites = null 14:59:57 policy-xacml-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 14:59:57 policy-xacml-pdp | ssl.endpoint.identification.algorithm = https 14:59:57 policy-xacml-pdp | ssl.engine.factory.class = null 14:59:57 policy-xacml-pdp | ssl.key.password = null 14:59:57 policy-xacml-pdp | ssl.keymanager.algorithm = SunX509 14:59:57 policy-xacml-pdp | ssl.keystore.certificate.chain = null 14:59:57 policy-xacml-pdp | ssl.keystore.key = null 14:59:57 policy-xacml-pdp | ssl.keystore.location = null 14:59:57 policy-xacml-pdp | ssl.keystore.password = null 14:59:57 policy-xacml-pdp | ssl.keystore.type = JKS 14:59:57 policy-xacml-pdp | ssl.protocol = TLSv1.3 14:59:57 policy-xacml-pdp | ssl.provider = null 14:59:57 policy-xacml-pdp | ssl.secure.random.implementation = null 14:59:57 policy-xacml-pdp | ssl.trustmanager.algorithm = PKIX 14:59:57 policy-xacml-pdp | ssl.truststore.certificates = null 14:59:57 policy-xacml-pdp | ssl.truststore.location = null 14:59:57 policy-xacml-pdp | ssl.truststore.password = null 14:59:57 policy-xacml-pdp | ssl.truststore.type = JKS 14:59:57 policy-xacml-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 14:59:57 policy-xacml-pdp | 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.223+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.366+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.366+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.366+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749826634365 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.369+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-bcceede6-cf80-4e3b-b200-9e273dce58d5-1, groupId=bcceede6-cf80-4e3b-b200-9e273dce58d5] Subscribed to topic(s): policy-pdp-pap 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.431+00:00|INFO|XacmlPdpApplicationManager|main] Initialization applications org.onap.policy.pdpx.main.parameters.XacmlApplicationParameters@7ec3394b JerseyClient(name=policyApiParameters, https=false, selfSignedCerts=false, hostname=policy-api, port=6969, basePath=null, userName=policyadmin, password=zb!XztG34, client=org.glassfish.jersey.client.JerseyClient@698122b2, baseUrl=http://policy-api:6969/, alive=true) 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.443+00:00|INFO|XacmlPdpApplicationManager|main] Application guard supports [onap.policies.controlloop.guard.common.FrequencyLimiter 1.0.0, onap.policies.controlloop.guard.common.MinMax 1.0.0, onap.policies.controlloop.guard.common.Blacklist 1.0.0, 
onap.policies.controlloop.guard.common.Filter 1.0.0, onap.policies.controlloop.guard.coordination.FirstBlocksSecond 1.0.0] 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.443+00:00|INFO|XacmlPdpApplicationManager|main] initializeApplicationPath guard at this path /opt/app/policy/pdpx/apps/guard 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.443+00:00|INFO|StdXacmlApplicationServiceProvider|main] New Path is /opt/app/policy/pdpx/apps/guard 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.444+00:00|INFO|XacmlPolicyUtils|main] Loading xacml properties /opt/app/policy/pdpx/apps/guard/xacml.properties 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.445+00:00|INFO|XacmlPolicyUtils|main] Loaded xacml properties 14:59:57 policy-xacml-pdp | {count-recent-operations.persistenceunit=OperationsHistoryPU, get-operation-outcome.name=GetOperationOutcome, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:oasis:names:tc:xacml:3.0:policy-combining-algorithm:deny-overrides, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, count-recent-operations.classname=org.onap.policy.pdp.xacml.application.common.operationshistory.CountRecentOperationsPip, get-operation-outcome.description=Returns operation outcome, count-recent-operations.description=Returns operation counts based on time window, jakarta.persistence.jdbc.password=policy_user, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory, get-operation-outcome.issuer=urn:org:onap:xacml:guard:get-operation-outcome, get-operation-outcome.persistenceunit=OperationsHistoryPU, jakarta.persistence.jdbc.driver=org.postgresql.Driver, count-recent-operations.name=CountRecentOperations, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, jakarta.persistence.jdbc.url=jdbc:postgresql://postgres:5432/operationshistory, jakarta.persistence.jdbc.user=policy_user, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, count-recent-operations.issuer=urn:org:onap:xacml:guard:count-recent-operations, xacml.pip.engines=count-recent-operations,get-operation-outcome, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, get-operation-outcome.classname=org.onap.policy.pdp.xacml.application.common.operationshistory.GetOperationOutcomePip} 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.445+00:00|INFO|XacmlPolicyUtils|main] count-recent-operations.persistenceunit -> OperationsHistoryPU 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.445+00:00|INFO|XacmlPolicyUtils|main] get-operation-outcome.name -> GetOperationOutcome 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.445+00:00|INFO|XacmlPolicyUtils|main] xacml.att.evaluationContextFactory -> com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.445+00:00|INFO|XacmlPolicyUtils|main] xacml.pepEngineFactory -> com.att.research.xacml.std.pep.StdEngineFactory 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.445+00:00|INFO|XacmlPolicyUtils|main] xacml.dataTypeFactory -> com.att.research.xacml.std.StdDataTypeFactory 14:59:57 policy-xacml-pdp | 
[2025-06-13T14:57:14.445+00:00|INFO|XacmlPolicyUtils|main] xacml.att.policyFinderFactory.combineRootPolicies -> urn:oasis:names:tc:xacml:3.0:policy-combining-algorithm:deny-overrides 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.445+00:00|INFO|XacmlPolicyUtils|main] xacml.att.policyFinderFactory -> org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.445+00:00|INFO|XacmlPolicyUtils|main] count-recent-operations.classname -> org.onap.policy.pdp.xacml.application.common.operationshistory.CountRecentOperationsPip 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.445+00:00|INFO|XacmlPolicyUtils|main] get-operation-outcome.description -> Returns operation outcome 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.445+00:00|INFO|XacmlPolicyUtils|main] count-recent-operations.description -> Returns operation counts based on time window 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.445+00:00|INFO|XacmlPolicyUtils|main] jakarta.persistence.jdbc.password -> policy_user 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.445+00:00|INFO|XacmlPolicyUtils|main] xacml.att.functionDefinitionFactory -> com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.445+00:00|INFO|XacmlPolicyUtils|main] get-operation-outcome.issuer -> urn:org:onap:xacml:guard:get-operation-outcome 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.445+00:00|INFO|XacmlPolicyUtils|main] get-operation-outcome.persistenceunit -> OperationsHistoryPU 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.445+00:00|INFO|XacmlPolicyUtils|main] jakarta.persistence.jdbc.driver -> org.postgresql.Driver 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.446+00:00|INFO|XacmlPolicyUtils|main] count-recent-operations.name -> CountRecentOperations 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.446+00:00|INFO|XacmlPolicyUtils|main] xacml.att.combiningAlgorithmFactory -> com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.446+00:00|INFO|XacmlPolicyUtils|main] xacml.pdpEngineFactory -> com.att.research.xacmlatt.pdp.ATTPDPEngineFactory 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.446+00:00|INFO|XacmlPolicyUtils|main] jakarta.persistence.jdbc.url -> jdbc:postgresql://postgres:5432/operationshistory 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.446+00:00|INFO|XacmlPolicyUtils|main] jakarta.persistence.jdbc.user -> policy_user 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.446+00:00|INFO|XacmlPolicyUtils|main] xacml.traceEngineFactory -> com.att.research.xacml.std.trace.LoggingTraceEngineFactory 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.446+00:00|INFO|XacmlPolicyUtils|main] count-recent-operations.issuer -> urn:org:onap:xacml:guard:count-recent-operations 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.446+00:00|INFO|XacmlPolicyUtils|main] xacml.pip.engines -> count-recent-operations,get-operation-outcome 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.446+00:00|INFO|XacmlPolicyUtils|main] xacml.pipFinderFactory -> com.att.research.xacml.std.pip.StdPIPFinderFactory 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.446+00:00|INFO|XacmlPolicyUtils|main] get-operation-outcome.classname -> org.onap.policy.pdp.xacml.application.common.operationshistory.GetOperationOutcomePip 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.446+00:00|INFO|StdXacmlApplicationServiceProvider|main] {count-recent-operations.persistenceunit=OperationsHistoryPU, 
get-operation-outcome.name=GetOperationOutcome, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:oasis:names:tc:xacml:3.0:policy-combining-algorithm:deny-overrides, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, count-recent-operations.classname=org.onap.policy.pdp.xacml.application.common.operationshistory.CountRecentOperationsPip, get-operation-outcome.description=Returns operation outcome, count-recent-operations.description=Returns operation counts based on time window, jakarta.persistence.jdbc.password=policy_user, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory, get-operation-outcome.issuer=urn:org:onap:xacml:guard:get-operation-outcome, get-operation-outcome.persistenceunit=OperationsHistoryPU, jakarta.persistence.jdbc.driver=org.postgresql.Driver, count-recent-operations.name=CountRecentOperations, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, jakarta.persistence.jdbc.url=jdbc:postgresql://postgres:5432/operationshistory, jakarta.persistence.jdbc.user=policy_user, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, count-recent-operations.issuer=urn:org:onap:xacml:guard:count-recent-operations, xacml.pip.engines=count-recent-operations,get-operation-outcome, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, get-operation-outcome.classname=org.onap.policy.pdp.xacml.application.common.operationshistory.GetOperationOutcomePip} 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.448+00:00|WARN|XACMLProperties|main] Properties file /usr/lib/jvm/java-17-openjdk/lib/xacml.properties cannot be read. 
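
Annotation: the records above show XacmlPolicyUtils loading /opt/app/policy/pdpx/apps/guard/xacml.properties and then echoing every key/value pair. The trailing WARN about /usr/lib/jvm/java-17-openjdk/lib/xacml.properties appears to be the AT&T XACML library probing a JRE-relative default location that does not exist in this container; since each application's own xacml.properties loaded successfully, it is presumably benign. Below is a minimal, hypothetical sketch (not the ONAP XacmlPolicyUtils source) of loading such a file with java.util.Properties and echoing entries the way the log does; the path and key names are taken from the log itself.

import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Properties;

// Illustrative stand-in only: loads a xacml.properties file and prints each
// entry as "key -> value", mirroring the per-property INFO lines above.
public final class XacmlPropertiesSketch {
    public static Properties load(Path propertiesFile) throws IOException {
        Properties props = new Properties();
        try (InputStream in = Files.newInputStream(propertiesFile)) {
            props.load(in);
        }
        // e.g. "xacml.pip.engines -> count-recent-operations,get-operation-outcome"
        props.forEach((key, value) -> System.out.println(key + " -> " + value));
        return props;
    }

    public static void main(String[] args) throws IOException {
        load(Path.of("/opt/app/policy/pdpx/apps/guard/xacml.properties"));
    }
}
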
14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.476+00:00|INFO|XacmlPdpApplicationManager|main] Application optimization supports [onap.policies.optimization.resource.AffinityPolicy 1.0.0, onap.policies.optimization.resource.DistancePolicy 1.0.0, onap.policies.optimization.resource.HpaPolicy 1.0.0, onap.policies.optimization.resource.OptimizationPolicy 1.0.0, onap.policies.optimization.resource.PciPolicy 1.0.0, onap.policies.optimization.service.QueryPolicy 1.0.0, onap.policies.optimization.service.SubscriberPolicy 1.0.0, onap.policies.optimization.resource.Vim_fit 1.0.0, onap.policies.optimization.resource.VnfPolicy 1.0.0] 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.476+00:00|INFO|XacmlPdpApplicationManager|main] initializeApplicationPath optimization at this path /opt/app/policy/pdpx/apps/optimization 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.476+00:00|INFO|StdXacmlApplicationServiceProvider|main] New Path is /opt/app/policy/pdpx/apps/optimization 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.476+00:00|INFO|XacmlPolicyUtils|main] Loading xacml properties /opt/app/policy/pdpx/apps/optimization/xacml.properties 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.477+00:00|INFO|XacmlPolicyUtils|main] Loaded xacml properties 14:59:57 policy-xacml-pdp | {xacml.rootPolicies=, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.referencedPolicies=, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory} 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.477+00:00|INFO|XacmlPolicyUtils|main] xacml.rootPolicies -> 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.477+00:00|INFO|XacmlPolicyUtils|main] xacml.att.evaluationContextFactory -> com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.477+00:00|INFO|XacmlPolicyUtils|main] xacml.att.combiningAlgorithmFactory -> com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.477+00:00|INFO|XacmlPolicyUtils|main] xacml.pepEngineFactory -> com.att.research.xacml.std.pep.StdEngineFactory 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.477+00:00|INFO|XacmlPolicyUtils|main] xacml.dataTypeFactory -> com.att.research.xacml.std.StdDataTypeFactory 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.477+00:00|INFO|XacmlPolicyUtils|main] xacml.att.policyFinderFactory.combineRootPolicies -> urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.477+00:00|INFO|XacmlPolicyUtils|main] xacml.referencedPolicies -> 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.477+00:00|INFO|XacmlPolicyUtils|main] xacml.att.policyFinderFactory -> org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory 
14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.477+00:00|INFO|XacmlPolicyUtils|main] xacml.pdpEngineFactory -> com.att.research.xacmlatt.pdp.ATTPDPEngineFactory 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.477+00:00|INFO|XacmlPolicyUtils|main] xacml.traceEngineFactory -> com.att.research.xacml.std.trace.LoggingTraceEngineFactory 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.477+00:00|INFO|XacmlPolicyUtils|main] xacml.pipFinderFactory -> com.att.research.xacml.std.pip.StdPIPFinderFactory 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.477+00:00|INFO|XacmlPolicyUtils|main] xacml.att.functionDefinitionFactory -> com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.477+00:00|INFO|StdXacmlApplicationServiceProvider|main] {xacml.rootPolicies=, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.referencedPolicies=, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory} 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.478+00:00|INFO|XacmlPdpApplicationManager|main] Application naming supports [onap.policies.Naming 1.0.0] 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.478+00:00|INFO|XacmlPdpApplicationManager|main] initializeApplicationPath naming at this path /opt/app/policy/pdpx/apps/naming 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.478+00:00|INFO|StdXacmlApplicationServiceProvider|main] New Path is /opt/app/policy/pdpx/apps/naming 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.478+00:00|INFO|XacmlPolicyUtils|main] Loading xacml properties /opt/app/policy/pdpx/apps/naming/xacml.properties 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.478+00:00|INFO|XacmlPolicyUtils|main] Loaded xacml properties 14:59:57 policy-xacml-pdp | {xacml.rootPolicies=, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.referencedPolicies=, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory} 14:59:57 policy-xacml-pdp | 
[2025-06-13T14:57:14.479+00:00|INFO|XacmlPolicyUtils|main] xacml.rootPolicies -> 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.479+00:00|INFO|XacmlPolicyUtils|main] xacml.att.evaluationContextFactory -> com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.479+00:00|INFO|XacmlPolicyUtils|main] xacml.att.combiningAlgorithmFactory -> com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.479+00:00|INFO|XacmlPolicyUtils|main] xacml.pepEngineFactory -> com.att.research.xacml.std.pep.StdEngineFactory 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.479+00:00|INFO|XacmlPolicyUtils|main] xacml.dataTypeFactory -> com.att.research.xacml.std.StdDataTypeFactory 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.479+00:00|INFO|XacmlPolicyUtils|main] xacml.att.policyFinderFactory.combineRootPolicies -> urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.479+00:00|INFO|XacmlPolicyUtils|main] xacml.referencedPolicies -> 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.479+00:00|INFO|XacmlPolicyUtils|main] xacml.att.policyFinderFactory -> org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.479+00:00|INFO|XacmlPolicyUtils|main] xacml.pdpEngineFactory -> com.att.research.xacmlatt.pdp.ATTPDPEngineFactory 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.479+00:00|INFO|XacmlPolicyUtils|main] xacml.traceEngineFactory -> com.att.research.xacml.std.trace.LoggingTraceEngineFactory 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.479+00:00|INFO|XacmlPolicyUtils|main] xacml.pipFinderFactory -> com.att.research.xacml.std.pip.StdPIPFinderFactory 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.479+00:00|INFO|XacmlPolicyUtils|main] xacml.att.functionDefinitionFactory -> com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.479+00:00|INFO|StdXacmlApplicationServiceProvider|main] {xacml.rootPolicies=, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.referencedPolicies=, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory} 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.481+00:00|INFO|XacmlPdpApplicationManager|main] Application native supports [onap.policies.native.Xacml 1.0.0, onap.policies.native.ToscaXacml 1.0.0] 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.482+00:00|INFO|XacmlPdpApplicationManager|main] initializeApplicationPath native at this path /opt/app/policy/pdpx/apps/native 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.482+00:00|INFO|StdXacmlApplicationServiceProvider|main] New Path is 
/opt/app/policy/pdpx/apps/native 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.482+00:00|INFO|XacmlPolicyUtils|main] Loading xacml properties /opt/app/policy/pdpx/apps/native/xacml.properties 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.482+00:00|INFO|XacmlPolicyUtils|main] Loaded xacml properties 14:59:57 policy-xacml-pdp | {xacml.rootPolicies=, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.referencedPolicies=, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory} 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.482+00:00|INFO|XacmlPolicyUtils|main] xacml.rootPolicies -> 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.482+00:00|INFO|XacmlPolicyUtils|main] xacml.att.evaluationContextFactory -> com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.482+00:00|INFO|XacmlPolicyUtils|main] xacml.att.combiningAlgorithmFactory -> com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.482+00:00|INFO|XacmlPolicyUtils|main] xacml.pepEngineFactory -> com.att.research.xacml.std.pep.StdEngineFactory 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.482+00:00|INFO|XacmlPolicyUtils|main] xacml.dataTypeFactory -> com.att.research.xacml.std.StdDataTypeFactory 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.482+00:00|INFO|XacmlPolicyUtils|main] xacml.att.policyFinderFactory.combineRootPolicies -> urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.482+00:00|INFO|XacmlPolicyUtils|main] xacml.referencedPolicies -> 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.482+00:00|INFO|XacmlPolicyUtils|main] xacml.att.policyFinderFactory -> org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.482+00:00|INFO|XacmlPolicyUtils|main] xacml.pdpEngineFactory -> com.att.research.xacmlatt.pdp.ATTPDPEngineFactory 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.482+00:00|INFO|XacmlPolicyUtils|main] xacml.traceEngineFactory -> com.att.research.xacml.std.trace.LoggingTraceEngineFactory 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.482+00:00|INFO|XacmlPolicyUtils|main] xacml.pipFinderFactory -> com.att.research.xacml.std.pip.StdPIPFinderFactory 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.482+00:00|INFO|XacmlPolicyUtils|main] xacml.att.functionDefinitionFactory -> com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.482+00:00|INFO|StdXacmlApplicationServiceProvider|main] {xacml.rootPolicies=, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, 
xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.referencedPolicies=, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory} 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.483+00:00|INFO|XacmlPdpApplicationManager|main] Application match supports [onap.policies.Match 1.0.0] 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.483+00:00|INFO|XacmlPdpApplicationManager|main] initializeApplicationPath match at this path /opt/app/policy/pdpx/apps/match 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.483+00:00|INFO|StdXacmlApplicationServiceProvider|main] New Path is /opt/app/policy/pdpx/apps/match 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.483+00:00|INFO|XacmlPolicyUtils|main] Loading xacml properties /opt/app/policy/pdpx/apps/match/xacml.properties 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.483+00:00|INFO|XacmlPolicyUtils|main] Loaded xacml properties 14:59:57 policy-xacml-pdp | {xacml.rootPolicies=, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.referencedPolicies=, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory} 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.483+00:00|INFO|XacmlPolicyUtils|main] xacml.rootPolicies -> 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.483+00:00|INFO|XacmlPolicyUtils|main] xacml.att.evaluationContextFactory -> com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.483+00:00|INFO|XacmlPolicyUtils|main] xacml.att.combiningAlgorithmFactory -> com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.483+00:00|INFO|XacmlPolicyUtils|main] xacml.pepEngineFactory -> com.att.research.xacml.std.pep.StdEngineFactory 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.483+00:00|INFO|XacmlPolicyUtils|main] xacml.dataTypeFactory -> com.att.research.xacml.std.StdDataTypeFactory 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.483+00:00|INFO|XacmlPolicyUtils|main] xacml.att.policyFinderFactory.combineRootPolicies -> 
urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.484+00:00|INFO|XacmlPolicyUtils|main] xacml.referencedPolicies -> 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.484+00:00|INFO|XacmlPolicyUtils|main] xacml.att.policyFinderFactory -> org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.484+00:00|INFO|XacmlPolicyUtils|main] xacml.pdpEngineFactory -> com.att.research.xacmlatt.pdp.ATTPDPEngineFactory 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.484+00:00|INFO|XacmlPolicyUtils|main] xacml.traceEngineFactory -> com.att.research.xacml.std.trace.LoggingTraceEngineFactory 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.484+00:00|INFO|XacmlPolicyUtils|main] xacml.pipFinderFactory -> com.att.research.xacml.std.pip.StdPIPFinderFactory 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.484+00:00|INFO|XacmlPolicyUtils|main] xacml.att.functionDefinitionFactory -> com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.484+00:00|INFO|StdXacmlApplicationServiceProvider|main] {xacml.rootPolicies=, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.referencedPolicies=, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory} 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.485+00:00|INFO|XacmlPdpApplicationManager|main] Application monitoring supports [onap.Monitoring 1.0.0] 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.485+00:00|INFO|XacmlPdpApplicationManager|main] initializeApplicationPath monitoring at this path /opt/app/policy/pdpx/apps/monitoring 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.485+00:00|INFO|StdXacmlApplicationServiceProvider|main] New Path is /opt/app/policy/pdpx/apps/monitoring 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.485+00:00|INFO|XacmlPolicyUtils|main] Loading xacml properties /opt/app/policy/pdpx/apps/monitoring/xacml.properties 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.485+00:00|INFO|XacmlPolicyUtils|main] Loaded xacml properties 14:59:57 policy-xacml-pdp | {xacml.rootPolicies=, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.referencedPolicies=, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, 
xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory} 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.485+00:00|INFO|XacmlPolicyUtils|main] xacml.rootPolicies -> 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.485+00:00|INFO|XacmlPolicyUtils|main] xacml.att.evaluationContextFactory -> com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.485+00:00|INFO|XacmlPolicyUtils|main] xacml.att.combiningAlgorithmFactory -> com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.485+00:00|INFO|XacmlPolicyUtils|main] xacml.pepEngineFactory -> com.att.research.xacml.std.pep.StdEngineFactory 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.485+00:00|INFO|XacmlPolicyUtils|main] xacml.dataTypeFactory -> com.att.research.xacml.std.StdDataTypeFactory 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.485+00:00|INFO|XacmlPolicyUtils|main] xacml.att.policyFinderFactory.combineRootPolicies -> urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.485+00:00|INFO|XacmlPolicyUtils|main] xacml.referencedPolicies -> 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.485+00:00|INFO|XacmlPolicyUtils|main] xacml.att.policyFinderFactory -> org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.485+00:00|INFO|XacmlPolicyUtils|main] xacml.pdpEngineFactory -> com.att.research.xacmlatt.pdp.ATTPDPEngineFactory 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.485+00:00|INFO|XacmlPolicyUtils|main] xacml.traceEngineFactory -> com.att.research.xacml.std.trace.LoggingTraceEngineFactory 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.486+00:00|INFO|XacmlPolicyUtils|main] xacml.pipFinderFactory -> com.att.research.xacml.std.pip.StdPIPFinderFactory 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.486+00:00|INFO|XacmlPolicyUtils|main] xacml.att.functionDefinitionFactory -> com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.486+00:00|INFO|StdXacmlApplicationServiceProvider|main] {xacml.rootPolicies=, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.referencedPolicies=, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory} 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.486+00:00|INFO|XacmlPdpApplicationManager|main] Finished applications initialization 
{optimize=org.onap.policy.xacml.pdp.application.optimization.OptimizationPdpApplication@2b95e48b, native=org.onap.policy.xacml.pdp.application.nativ.NativePdpApplication@4a3329b9, guard=org.onap.policy.xacml.pdp.application.guard.GuardPdpApplication@3dddefd8, naming=org.onap.policy.xacml.pdp.application.naming.NamingPdpApplication@160ac7fb, match=org.onap.policy.xacml.pdp.application.match.MatchPdpApplication@12bfd80d, configure=org.onap.policy.xacml.pdp.application.monitoring.MonitoringPdpApplication@41925502} 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.503+00:00|INFO|XacmlPdpHearbeatPublisher|main] heartbeat topic probe 4000ms 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.694+00:00|INFO|ServiceManager|main] service manager starting 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.694+00:00|INFO|ServiceManager|main] service manager starting XACML PDP parameters 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.695+00:00|INFO|ServiceManager|main] service manager starting Message Dispatcher 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.695+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=bcceede6-cf80-4e3b-b200-9e273dce58d5, consumerInstance=policy-xacml-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@5f574cc2 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.708+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=bcceede6-cf80-4e3b-b200-9e273dce58d5, consumerInstance=policy-xacml-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.709+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 14:59:57 policy-xacml-pdp | allow.auto.create.topics = true 14:59:57 policy-xacml-pdp | auto.commit.interval.ms = 5000 14:59:57 policy-xacml-pdp | auto.include.jmx.reporter = true 14:59:57 policy-xacml-pdp | auto.offset.reset = latest 14:59:57 policy-xacml-pdp | bootstrap.servers = [kafka:9092] 14:59:57 policy-xacml-pdp | check.crcs = true 14:59:57 policy-xacml-pdp | client.dns.lookup = use_all_dns_ips 14:59:57 policy-xacml-pdp | client.id = consumer-bcceede6-cf80-4e3b-b200-9e273dce58d5-2 14:59:57 policy-xacml-pdp | client.rack = 14:59:57 policy-xacml-pdp | connections.max.idle.ms = 540000 14:59:57 policy-xacml-pdp | default.api.timeout.ms = 60000 14:59:57 policy-xacml-pdp | enable.auto.commit = true 14:59:57 policy-xacml-pdp | enable.metrics.push = true 14:59:57 policy-xacml-pdp | exclude.internal.topics = true 14:59:57 policy-xacml-pdp | fetch.max.bytes = 52428800 14:59:57 policy-xacml-pdp | fetch.max.wait.ms = 500 14:59:57 policy-xacml-pdp | fetch.min.bytes 
= 1 14:59:57 policy-xacml-pdp | group.id = bcceede6-cf80-4e3b-b200-9e273dce58d5 14:59:57 policy-xacml-pdp | group.instance.id = null 14:59:57 policy-xacml-pdp | group.protocol = classic 14:59:57 policy-xacml-pdp | group.remote.assignor = null 14:59:57 policy-xacml-pdp | heartbeat.interval.ms = 3000 14:59:57 policy-xacml-pdp | interceptor.classes = [] 14:59:57 policy-xacml-pdp | internal.leave.group.on.close = true 14:59:57 policy-xacml-pdp | internal.throw.on.fetch.stable.offset.unsupported = false 14:59:57 policy-xacml-pdp | isolation.level = read_uncommitted 14:59:57 policy-xacml-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 14:59:57 policy-xacml-pdp | max.partition.fetch.bytes = 1048576 14:59:57 policy-xacml-pdp | max.poll.interval.ms = 300000 14:59:57 policy-xacml-pdp | max.poll.records = 500 14:59:57 policy-xacml-pdp | metadata.max.age.ms = 300000 14:59:57 policy-xacml-pdp | metadata.recovery.strategy = none 14:59:57 policy-xacml-pdp | metric.reporters = [] 14:59:57 policy-xacml-pdp | metrics.num.samples = 2 14:59:57 policy-xacml-pdp | metrics.recording.level = INFO 14:59:57 policy-xacml-pdp | metrics.sample.window.ms = 30000 14:59:57 policy-xacml-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 14:59:57 policy-xacml-pdp | receive.buffer.bytes = 65536 14:59:57 policy-xacml-pdp | reconnect.backoff.max.ms = 1000 14:59:57 policy-xacml-pdp | reconnect.backoff.ms = 50 14:59:57 policy-xacml-pdp | request.timeout.ms = 30000 14:59:57 policy-xacml-pdp | retry.backoff.max.ms = 1000 14:59:57 policy-xacml-pdp | retry.backoff.ms = 100 14:59:57 policy-xacml-pdp | sasl.client.callback.handler.class = null 14:59:57 policy-xacml-pdp | sasl.jaas.config = null 14:59:57 policy-xacml-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit 14:59:57 policy-xacml-pdp | sasl.kerberos.min.time.before.relogin = 60000 14:59:57 policy-xacml-pdp | sasl.kerberos.service.name = null 14:59:57 policy-xacml-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 14:59:57 policy-xacml-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 14:59:57 policy-xacml-pdp | sasl.login.callback.handler.class = null 14:59:57 policy-xacml-pdp | sasl.login.class = null 14:59:57 policy-xacml-pdp | sasl.login.connect.timeout.ms = null 14:59:57 policy-xacml-pdp | sasl.login.read.timeout.ms = null 14:59:57 policy-xacml-pdp | sasl.login.refresh.buffer.seconds = 300 14:59:57 policy-xacml-pdp | sasl.login.refresh.min.period.seconds = 60 14:59:57 policy-xacml-pdp | sasl.login.refresh.window.factor = 0.8 14:59:57 policy-xacml-pdp | sasl.login.refresh.window.jitter = 0.05 14:59:57 policy-xacml-pdp | sasl.login.retry.backoff.max.ms = 10000 14:59:57 policy-xacml-pdp | sasl.login.retry.backoff.ms = 100 14:59:57 policy-xacml-pdp | sasl.mechanism = GSSAPI 14:59:57 policy-xacml-pdp | sasl.oauthbearer.clock.skew.seconds = 30 14:59:57 policy-xacml-pdp | sasl.oauthbearer.expected.audience = null 14:59:57 policy-xacml-pdp | sasl.oauthbearer.expected.issuer = null 14:59:57 policy-xacml-pdp | sasl.oauthbearer.header.urlencode = false 14:59:57 policy-xacml-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 14:59:57 policy-xacml-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 14:59:57 policy-xacml-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 14:59:57 policy-xacml-pdp | sasl.oauthbearer.jwks.endpoint.url = null 14:59:57 policy-xacml-pdp | sasl.oauthbearer.scope.claim.name = scope 
14:59:57 policy-xacml-pdp | sasl.oauthbearer.sub.claim.name = sub 14:59:57 policy-xacml-pdp | sasl.oauthbearer.token.endpoint.url = null 14:59:57 policy-xacml-pdp | security.protocol = PLAINTEXT 14:59:57 policy-xacml-pdp | security.providers = null 14:59:57 policy-xacml-pdp | send.buffer.bytes = 131072 14:59:57 policy-xacml-pdp | session.timeout.ms = 45000 14:59:57 policy-xacml-pdp | socket.connection.setup.timeout.max.ms = 30000 14:59:57 policy-xacml-pdp | socket.connection.setup.timeout.ms = 10000 14:59:57 policy-xacml-pdp | ssl.cipher.suites = null 14:59:57 policy-xacml-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 14:59:57 policy-xacml-pdp | ssl.endpoint.identification.algorithm = https 14:59:57 policy-xacml-pdp | ssl.engine.factory.class = null 14:59:57 policy-xacml-pdp | ssl.key.password = null 14:59:57 policy-xacml-pdp | ssl.keymanager.algorithm = SunX509 14:59:57 policy-xacml-pdp | ssl.keystore.certificate.chain = null 14:59:57 policy-xacml-pdp | ssl.keystore.key = null 14:59:57 policy-xacml-pdp | ssl.keystore.location = null 14:59:57 policy-xacml-pdp | ssl.keystore.password = null 14:59:57 policy-xacml-pdp | ssl.keystore.type = JKS 14:59:57 policy-xacml-pdp | ssl.protocol = TLSv1.3 14:59:57 policy-xacml-pdp | ssl.provider = null 14:59:57 policy-xacml-pdp | ssl.secure.random.implementation = null 14:59:57 policy-xacml-pdp | ssl.trustmanager.algorithm = PKIX 14:59:57 policy-xacml-pdp | ssl.truststore.certificates = null 14:59:57 policy-xacml-pdp | ssl.truststore.location = null 14:59:57 policy-xacml-pdp | ssl.truststore.password = null 14:59:57 policy-xacml-pdp | ssl.truststore.type = JKS 14:59:57 policy-xacml-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 14:59:57 policy-xacml-pdp | 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.710+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.721+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.721+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.721+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749826634721 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.722+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-bcceede6-cf80-4e3b-b200-9e273dce58d5-2, groupId=bcceede6-cf80-4e3b-b200-9e273dce58d5] Subscribed to topic(s): policy-pdp-pap 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.722+00:00|INFO|ServiceManager|main] service manager starting topics 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.723+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=bcceede6-cf80-4e3b-b200-9e273dce58d5, consumerInstance=policy-xacml-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.723+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink 
[partitionId=e0036ebe-920b-4b5e-8391-fea799397d17, alive=false, publisher=null]]: starting 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.733+00:00|INFO|ProducerConfig|main] ProducerConfig values: 14:59:57 policy-xacml-pdp | acks = -1 14:59:57 policy-xacml-pdp | auto.include.jmx.reporter = true 14:59:57 policy-xacml-pdp | batch.size = 16384 14:59:57 policy-xacml-pdp | bootstrap.servers = [kafka:9092] 14:59:57 policy-xacml-pdp | buffer.memory = 33554432 14:59:57 policy-xacml-pdp | client.dns.lookup = use_all_dns_ips 14:59:57 policy-xacml-pdp | client.id = producer-1 14:59:57 policy-xacml-pdp | compression.gzip.level = -1 14:59:57 policy-xacml-pdp | compression.lz4.level = 9 14:59:57 policy-xacml-pdp | compression.type = none 14:59:57 policy-xacml-pdp | compression.zstd.level = 3 14:59:57 policy-xacml-pdp | connections.max.idle.ms = 540000 14:59:57 policy-xacml-pdp | delivery.timeout.ms = 120000 14:59:57 policy-xacml-pdp | enable.idempotence = true 14:59:57 policy-xacml-pdp | enable.metrics.push = true 14:59:57 policy-xacml-pdp | interceptor.classes = [] 14:59:57 policy-xacml-pdp | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 14:59:57 policy-xacml-pdp | linger.ms = 0 14:59:57 policy-xacml-pdp | max.block.ms = 60000 14:59:57 policy-xacml-pdp | max.in.flight.requests.per.connection = 5 14:59:57 policy-xacml-pdp | max.request.size = 1048576 14:59:57 policy-xacml-pdp | metadata.max.age.ms = 300000 14:59:57 policy-xacml-pdp | metadata.max.idle.ms = 300000 14:59:57 policy-xacml-pdp | metadata.recovery.strategy = none 14:59:57 policy-xacml-pdp | metric.reporters = [] 14:59:57 policy-xacml-pdp | metrics.num.samples = 2 14:59:57 policy-xacml-pdp | metrics.recording.level = INFO 14:59:57 policy-xacml-pdp | metrics.sample.window.ms = 30000 14:59:57 policy-xacml-pdp | partitioner.adaptive.partitioning.enable = true 14:59:57 policy-xacml-pdp | partitioner.availability.timeout.ms = 0 14:59:57 policy-xacml-pdp | partitioner.class = null 14:59:57 policy-xacml-pdp | partitioner.ignore.keys = false 14:59:57 policy-xacml-pdp | receive.buffer.bytes = 32768 14:59:57 policy-xacml-pdp | reconnect.backoff.max.ms = 1000 14:59:57 policy-xacml-pdp | reconnect.backoff.ms = 50 14:59:57 policy-xacml-pdp | request.timeout.ms = 30000 14:59:57 policy-xacml-pdp | retries = 2147483647 14:59:57 policy-xacml-pdp | retry.backoff.max.ms = 1000 14:59:57 policy-xacml-pdp | retry.backoff.ms = 100 14:59:57 policy-xacml-pdp | sasl.client.callback.handler.class = null 14:59:57 policy-xacml-pdp | sasl.jaas.config = null 14:59:57 policy-xacml-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit 14:59:57 policy-xacml-pdp | sasl.kerberos.min.time.before.relogin = 60000 14:59:57 policy-xacml-pdp | sasl.kerberos.service.name = null 14:59:57 policy-xacml-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 14:59:57 policy-xacml-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 14:59:57 policy-xacml-pdp | sasl.login.callback.handler.class = null 14:59:57 policy-xacml-pdp | sasl.login.class = null 14:59:57 policy-xacml-pdp | sasl.login.connect.timeout.ms = null 14:59:57 policy-xacml-pdp | sasl.login.read.timeout.ms = null 14:59:57 policy-xacml-pdp | sasl.login.refresh.buffer.seconds = 300 14:59:57 policy-xacml-pdp | sasl.login.refresh.min.period.seconds = 60 14:59:57 policy-xacml-pdp | sasl.login.refresh.window.factor = 0.8 14:59:57 policy-xacml-pdp | sasl.login.refresh.window.jitter = 0.05 14:59:57 policy-xacml-pdp | sasl.login.retry.backoff.max.ms = 10000 14:59:57 policy-xacml-pdp | sasl.login.retry.backoff.ms = 100 
14:59:57 policy-xacml-pdp | sasl.mechanism = GSSAPI 14:59:57 policy-xacml-pdp | sasl.oauthbearer.clock.skew.seconds = 30 14:59:57 policy-xacml-pdp | sasl.oauthbearer.expected.audience = null 14:59:57 policy-xacml-pdp | sasl.oauthbearer.expected.issuer = null 14:59:57 policy-xacml-pdp | sasl.oauthbearer.header.urlencode = false 14:59:57 policy-xacml-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 14:59:57 policy-xacml-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 14:59:57 policy-xacml-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 14:59:57 policy-xacml-pdp | sasl.oauthbearer.jwks.endpoint.url = null 14:59:57 policy-xacml-pdp | sasl.oauthbearer.scope.claim.name = scope 14:59:57 policy-xacml-pdp | sasl.oauthbearer.sub.claim.name = sub 14:59:57 policy-xacml-pdp | sasl.oauthbearer.token.endpoint.url = null 14:59:57 policy-xacml-pdp | security.protocol = PLAINTEXT 14:59:57 policy-xacml-pdp | security.providers = null 14:59:57 policy-xacml-pdp | send.buffer.bytes = 131072 14:59:57 policy-xacml-pdp | socket.connection.setup.timeout.max.ms = 30000 14:59:57 policy-xacml-pdp | socket.connection.setup.timeout.ms = 10000 14:59:57 policy-xacml-pdp | ssl.cipher.suites = null 14:59:57 policy-xacml-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 14:59:57 policy-xacml-pdp | ssl.endpoint.identification.algorithm = https 14:59:57 policy-xacml-pdp | ssl.engine.factory.class = null 14:59:57 policy-xacml-pdp | ssl.key.password = null 14:59:57 policy-xacml-pdp | ssl.keymanager.algorithm = SunX509 14:59:57 policy-xacml-pdp | ssl.keystore.certificate.chain = null 14:59:57 policy-xacml-pdp | ssl.keystore.key = null 14:59:57 policy-xacml-pdp | ssl.keystore.location = null 14:59:57 policy-xacml-pdp | ssl.keystore.password = null 14:59:57 policy-xacml-pdp | ssl.keystore.type = JKS 14:59:57 policy-xacml-pdp | ssl.protocol = TLSv1.3 14:59:57 policy-xacml-pdp | ssl.provider = null 14:59:57 policy-xacml-pdp | ssl.secure.random.implementation = null 14:59:57 policy-xacml-pdp | ssl.trustmanager.algorithm = PKIX 14:59:57 policy-xacml-pdp | ssl.truststore.certificates = null 14:59:57 policy-xacml-pdp | ssl.truststore.location = null 14:59:57 policy-xacml-pdp | ssl.truststore.password = null 14:59:57 policy-xacml-pdp | ssl.truststore.type = JKS 14:59:57 policy-xacml-pdp | transaction.timeout.ms = 60000 14:59:57 policy-xacml-pdp | transactional.id = null 14:59:57 policy-xacml-pdp | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 14:59:57 policy-xacml-pdp | 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.734+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.742+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. 
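
Annotation: the ProducerConfig dump above (acks = -1, enable.idempotence = true, retries = 2147483647) matches Kafka's idempotent-producer behavior, and "Instantiated an idempotent producer" confirms it: with idempotence enabled, acks is forced to all (-1) and retries are effectively unbounded. A minimal sketch of constructing an equivalent producer follows; the bootstrap server and topic names come from the log, but this is not the ONAP InlineKafkaTopicSink implementation, only an illustration of the reported settings.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

// Minimal sketch matching the ProducerConfig values dumped above.
public final class PdpPapProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // enable.idempotence = true implies acks=all (-1) and effectively
        // unbounded retries, exactly as the config dump shows.
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("policy-pdp-pap",
                "{\"messageName\":\"PDP_TOPIC_CHECK\"}"));
        }
    }
}
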
14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.762+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.762+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.762+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749826634762 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.762+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=e0036ebe-920b-4b5e-8391-fea799397d17, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.763+00:00|INFO|ServiceManager|main] service manager starting Terminate PDP 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.763+00:00|INFO|ServiceManager|main] service manager starting Heartbeat Publisher 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.763+00:00|INFO|ServiceManager|main] service manager starting REST Server 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.763+00:00|INFO|ServiceManager|main] service manager starting 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.763+00:00|INFO|ServiceManager|main] service manager starting REST RestServerParameters 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.772+00:00|INFO|TopicBase|pool-2-thread-1] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=bcceede6-cf80-4e3b-b200-9e273dce58d5, consumerInstance=policy-xacml-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: registering org.onap.policy.common.message.bus.event.client.BidirectionalTopicClient$$Lambda$503/0x00007f9d572ae2e8@357358c2 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.772+00:00|INFO|SingleThreadedBusTopicSource|pool-2-thread-1] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=bcceede6-cf80-4e3b-b200-9e273dce58d5, consumerInstance=policy-xacml-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=2, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=2]]]]: register: start not attempted 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.763+00:00|INFO|JettyServletServer|main] JettyJerseyServer [JerseyServlets={/metrics=io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet-6e9c413e==io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet@b94e35e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:<null>,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-a23a01d==org.glassfish.jersey.servlet.ServletContainer@d5e4ed96{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:<null>,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, 
user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=oejs.Server@38b972d7{STOPPED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@452c8a40{ROOT,/,b=null,a=STOPPED,h=oeje10s.SessionHandler@534243e4{STOPPED}}, connector=RestServerParameters@29006752{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=null, servlets={/metrics=io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet-6e9c413e==io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet@b94e35e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:<null>,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-a23a01d==org.glassfish.jersey.servlet.ServletContainer@d5e4ed96{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:<null>,STOPPED}})]: STARTING 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.774+00:00|INFO|ServiceManager|main] service manager started 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.774+00:00|INFO|ServiceManager|main] service manager started 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.775+00:00|INFO|Main|main] Started policy-xacml-pdp service successfully. 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.775+00:00|INFO|OrderedServiceImpl|pool-2-thread-1] ***** OrderedServiceImpl implementers: 14:59:57 policy-xacml-pdp | [] 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.774+00:00|INFO|JettyServletServer|RestServerParameters-6969] JettyJerseyServer [JerseyServlets={/metrics=io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet-6e9c413e==io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet@b94e35e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:<null>,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-a23a01d==org.glassfish.jersey.servlet.ServletContainer@d5e4ed96{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:<null>,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=oejs.Server@38b972d7{STOPPED}[12.0.21,sto=0], context=oeje10s.ServletContextHandler@452c8a40{ROOT,/,b=null,a=STOPPED,h=oeje10s.SessionHandler@534243e4{STOPPED}}, connector=RestServerParameters@29006752{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=Thread[RestServerParameters-6969,5,main], servlets={/metrics=io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet-6e9c413e==io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet@b94e35e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:<null>,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-a23a01d==org.glassfish.jersey.servlet.ServletContainer@d5e4ed96{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:<null>,STOPPED}})]: RUN 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:14.777+00:00|INFO|network|pool-2-thread-1] [OUT|KAFKA|policy-pdp-pap] 14:59:57 policy-xacml-pdp | {"messageName":"PDP_TOPIC_CHECK","requestId":"e18e1fff-9deb-4367-a557-a7dc64389e1f","timestampMs":1749826634765,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e"} 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:15.107+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bcceede6-cf80-4e3b-b200-9e273dce58d5-2, groupId=bcceede6-cf80-4e3b-b200-9e273dce58d5] Cluster ID: d-rF8NzzQdGshpvqUU-qrg 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:15.107+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: d-rF8NzzQdGshpvqUU-qrg 14:59:57 policy-xacml-pdp | 
[2025-06-13T14:57:15.108+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bcceede6-cf80-4e3b-b200-9e273dce58d5-2, groupId=bcceede6-cf80-4e3b-b200-9e273dce58d5] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:15.109+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 2 with epoch 0 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:15.115+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bcceede6-cf80-4e3b-b200-9e273dce58d5-2, groupId=bcceede6-cf80-4e3b-b200-9e273dce58d5] (Re-)joining group 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:15.131+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bcceede6-cf80-4e3b-b200-9e273dce58d5-2, groupId=bcceede6-cf80-4e3b-b200-9e273dce58d5] Request joining group due to: need to re-join with the given member-id: consumer-bcceede6-cf80-4e3b-b200-9e273dce58d5-2-3e916d49-16d0-43a1-ba43-76e9f3720c11 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:15.132+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bcceede6-cf80-4e3b-b200-9e273dce58d5-2, groupId=bcceede6-cf80-4e3b-b200-9e273dce58d5] (Re-)joining group 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:15.330+00:00|INFO|GsonMessageBodyHandler|RestServerParameters-6969] Using GSON for REST calls 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:15.330+00:00|INFO|YamlMessageBodyHandler|RestServerParameters-6969] Accepting YAML for REST calls 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:18.137+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bcceede6-cf80-4e3b-b200-9e273dce58d5-2, groupId=bcceede6-cf80-4e3b-b200-9e273dce58d5] Successfully joined group with generation Generation{generationId=1, memberId='consumer-bcceede6-cf80-4e3b-b200-9e273dce58d5-2-3e916d49-16d0-43a1-ba43-76e9f3720c11', protocol='range'} 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:18.145+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bcceede6-cf80-4e3b-b200-9e273dce58d5-2, groupId=bcceede6-cf80-4e3b-b200-9e273dce58d5] Finished assignment for group at generation 1: {consumer-bcceede6-cf80-4e3b-b200-9e273dce58d5-2-3e916d49-16d0-43a1-ba43-76e9f3720c11=Assignment(partitions=[policy-pdp-pap-0])} 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:18.154+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bcceede6-cf80-4e3b-b200-9e273dce58d5-2, groupId=bcceede6-cf80-4e3b-b200-9e273dce58d5] Successfully synced group in generation Generation{generationId=1, memberId='consumer-bcceede6-cf80-4e3b-b200-9e273dce58d5-2-3e916d49-16d0-43a1-ba43-76e9f3720c11', protocol='range'} 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:18.154+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bcceede6-cf80-4e3b-b200-9e273dce58d5-2, groupId=bcceede6-cf80-4e3b-b200-9e273dce58d5] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:18.156+00:00|INFO|ConsumerRebalanceListenerInvoker|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bcceede6-cf80-4e3b-b200-9e273dce58d5-2, groupId=bcceede6-cf80-4e3b-b200-9e273dce58d5] Adding newly assigned partitions: policy-pdp-pap-0 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:18.164+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] 
[Consumer clientId=consumer-bcceede6-cf80-4e3b-b200-9e273dce58d5-2, groupId=bcceede6-cf80-4e3b-b200-9e273dce58d5] Found no committed offset for partition policy-pdp-pap-0 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:18.175+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bcceede6-cf80-4e3b-b200-9e273dce58d5-2, groupId=bcceede6-cf80-4e3b-b200-9e273dce58d5] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:19.202+00:00|INFO|network|pool-2-thread-1] [OUT|KAFKA|policy-pdp-pap] 14:59:57 policy-xacml-pdp | {"messageName":"PDP_TOPIC_CHECK","requestId":"e18e1fff-9deb-4367-a557-a7dc64389e1f","timestampMs":1749826634765,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e"} 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:19.247+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 14:59:57 policy-xacml-pdp | {"messageName":"PDP_TOPIC_CHECK","requestId":"e18e1fff-9deb-4367-a557-a7dc64389e1f","timestampMs":1749826634765,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e"} 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:19.250+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_TOPIC_CHECK 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:19.251+00:00|INFO|BidirectionalTopicClient|KAFKA-source-policy-pdp-pap] topic policy-pdp-pap is ready; found matching message PdpTopicCheck(super=PdpMessage(messageName=PDP_TOPIC_CHECK, requestId=e18e1fff-9deb-4367-a557-a7dc64389e1f, timestampMs=1749826634765, name=xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e, pdpGroup=null, pdpSubgroup=null)) 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:19.256+00:00|INFO|TopicBase|pool-2-thread-1] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=bcceede6-cf80-4e3b-b200-9e273dce58d5, consumerInstance=policy-xacml-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=2, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=1, locked=false, #topicListeners=2]]]]: unregistering org.onap.policy.common.message.bus.event.client.BidirectionalTopicClient$$Lambda$503/0x00007f9d572ae2e8@357358c2 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:19.258+00:00|INFO|XacmlPdpHearbeatPublisher|pool-2-thread-1] Sending Xacml PDP heartbeat to the PAP - PdpStatus(super=PdpMessage(messageName=PDP_STATUS, requestId=8b7db761-4d49-42ed-9835-fab8afcf3c0a, timestampMs=1749826639257, name=xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e, pdpGroup=defaultGroup, pdpSubgroup=null), pdpType=xacml, state=PASSIVE, healthy=HEALTHY, description=null, policies=[], deploymentInstanceInfo=null, properties=null, response=null) 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:19.264+00:00|INFO|network|pool-2-thread-1] [OUT|KAFKA|policy-pdp-pap] 14:59:57 policy-xacml-pdp | {"pdpType":"xacml","state":"PASSIVE","healthy":"HEALTHY","policies":[],"messageName":"PDP_STATUS","requestId":"8b7db761-4d49-42ed-9835-fab8afcf3c0a","timestampMs":1749826639257,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup"} 14:59:57 policy-xacml-pdp | 
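Note: the PDP_TOPIC_CHECK exchange above is the xacml-pdp verifying that the policy-pdp-pap topic is usable before starting normal operation: it publishes a probe message, consumes its own copy back ("topic policy-pdp-pap is ready; found matching message ..."), unregisters the temporary listener, and only then sends its first PASSIVE PdpStatus heartbeat. Below is a minimal loop-back check of the same shape, a sketch in Python assuming kafka-python is available; the service itself uses the ONAP policy-common Kafka wrappers visible in the log, not this client.

import json
import uuid

from kafka import KafkaConsumer, KafkaProducer  # assumes kafka-python is installed

TOPIC = "policy-pdp-pap"   # topic name taken from the log above
BOOTSTRAP = "kafka:9092"   # broker address taken from the log above

# Publish a probe carrying a unique requestId, mirroring PDP_TOPIC_CHECK.
probe = {"messageName": "PDP_TOPIC_CHECK", "requestId": str(uuid.uuid4())}
producer = KafkaProducer(
    bootstrap_servers=BOOTSTRAP,
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send(TOPIC, probe)
producer.flush()

# Poll until the probe comes back; consumer_timeout_ms mirrors the
# fetchTimeout=15000 shown in the SingleThreadedBusTopicSource dump above.
consumer = KafkaConsumer(
    TOPIC,
    bootstrap_servers=BOOTSTRAP,
    auto_offset_reset="earliest",
    consumer_timeout_ms=15000,
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)
for record in consumer:
    if record.value.get("requestId") == probe["requestId"]:
        print(f"topic {TOPIC} is ready")
        break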
[2025-06-13T14:57:19.282+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 14:59:57 policy-xacml-pdp | {"pdpType":"xacml","state":"PASSIVE","healthy":"HEALTHY","policies":[],"messageName":"PDP_STATUS","requestId":"8b7db761-4d49-42ed-9835-fab8afcf3c0a","timestampMs":1749826639257,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup"} 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:19.282+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:19.923+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 14:59:57 policy-xacml-pdp | {"source":"pap-8d981b63-9064-4d54-8468-b1eb1f91dc26","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[{"type":"onap.policies.Naming","type_version":"1.0.0","properties":{"policy-instance-name":"ONAP_NF_NAMING_TIMESTAMP","naming-models":[{"naming-type":"VNF","naming-recipe":"AIC_CLOUD_REGION|DELIMITER|CONSTANT|DELIMITER|TIMESTAMP","name-operation":"to_lower_case()","naming-properties":[{"property-name":"AIC_CLOUD_REGION"},{"property-name":"CONSTANT","property-value":"onap-nf"},{"property-name":"TIMESTAMP"},{"property-value":"-","property-name":"DELIMITER"}]},{"naming-type":"VNFC","naming-recipe":"VNF_NAME|DELIMITER|NFC_NAMING_CODE|DELIMITER|SEQUENCE","name-operation":"to_lower_case()","naming-properties":[{"property-name":"VNF_NAME"},{"property-name":"SEQUENCE","increment-sequence":{"max":"zzz","scope":"ENTIRETY","start-value":"1","length":"3","increment":"1","sequence-type":"alpha-numeric"}},{"property-name":"NFC_NAMING_CODE"},{"property-value":"-","property-name":"DELIMITER"}]},{"naming-type":"VF-MODULE","naming-recipe":"VNF_NAME|DELIMITER|VF_MODULE_LABEL|DELIMITER|VF_MODULE_TYPE|DELIMITER|SEQUENCE","name-operation":"to_lower_case()","naming-properties":[{"property-name":"VNF_NAME"},{"property-value":"-","property-name":"DELIMITER"},{"property-name":"VF_MODULE_LABEL"},{"property-name":"VF_MODULE_TYPE"},{"property-name":"SEQUENCE","increment-sequence":{"max":"zzz","scope":"PRECEEDING","start-value":"1","length":"3","increment":"1","sequence-type":"alpha-numeric"}}]}]},"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0","metadata":{"policy-id":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","policy-version":"1.0.0"}}],"messageName":"PDP_UPDATE","requestId":"243683ae-56ab-4597-926a-fcce27e0e31d","timestampMs":1749826639850,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:19.931+00:00|INFO|XacmlPdpUpdateListener|KAFKA-source-policy-pdp-pap] PDP update message has been received from the PAP - PdpUpdate(super=PdpMessage(messageName=PDP_UPDATE, requestId=243683ae-56ab-4597-926a-fcce27e0e31d, timestampMs=1749826639850, name=xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e, pdpGroup=defaultGroup, pdpSubgroup=xacml), source=pap-8d981b63-9064-4d54-8468-b1eb1f91dc26, description=null, pdpHeartbeatIntervalMs=120000, policiesToBeDeployed=[ToscaPolicy(super=ToscaWithTypeAndObjectProperties(type=onap.policies.Naming, typeVersion=1.0.0, properties={policy-instance-name=ONAP_NF_NAMING_TIMESTAMP, naming-models=[{naming-type=VNF, naming-recipe=AIC_CLOUD_REGION|DELIMITER|CONSTANT|DELIMITER|TIMESTAMP, name-operation=to_lower_case(), naming-properties=[{property-name=AIC_CLOUD_REGION}, {property-name=CONSTANT, property-value=onap-nf}, {property-name=TIMESTAMP}, {property-value=-, property-name=DELIMITER}]}, {naming-type=VNFC, 
naming-recipe=VNF_NAME|DELIMITER|NFC_NAMING_CODE|DELIMITER|SEQUENCE, name-operation=to_lower_case(), naming-properties=[{property-name=VNF_NAME}, {property-name=SEQUENCE, increment-sequence={max=zzz, scope=ENTIRETY, start-value=1, length=3, increment=1, sequence-type=alpha-numeric}}, {property-name=NFC_NAMING_CODE}, {property-value=-, property-name=DELIMITER}]}, {naming-type=VF-MODULE, naming-recipe=VNF_NAME|DELIMITER|VF_MODULE_LABEL|DELIMITER|VF_MODULE_TYPE|DELIMITER|SEQUENCE, name-operation=to_lower_case(), naming-properties=[{property-name=VNF_NAME}, {property-value=-, property-name=DELIMITER}, {property-name=VF_MODULE_LABEL}, {property-name=VF_MODULE_TYPE}, {property-name=SEQUENCE, increment-sequence={max=zzz, scope=PRECEEDING, start-value=1, length=3, increment=1, sequence-type=alpha-numeric}}]}]}))], policiesToBeUndeployed=[]) 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:19.940+00:00|INFO|StdBaseTranslator|KAFKA-source-policy-pdp-pap] Obligation Policy id: SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP type: onap.policies.Naming weight: null policy: 14:59:57 policy-xacml-pdp | {"type":"onap.policies.Naming","type_version":"1.0.0","properties":{"policy-instance-name":"ONAP_NF_NAMING_TIMESTAMP","naming-models":[{"naming-type":"VNF","naming-recipe":"AIC_CLOUD_REGION|DELIMITER|CONSTANT|DELIMITER|TIMESTAMP","name-operation":"to_lower_case()","naming-properties":[{"property-name":"AIC_CLOUD_REGION"},{"property-name":"CONSTANT","property-value":"onap-nf"},{"property-name":"TIMESTAMP"},{"property-value":"-","property-name":"DELIMITER"}]},{"naming-type":"VNFC","naming-recipe":"VNF_NAME|DELIMITER|NFC_NAMING_CODE|DELIMITER|SEQUENCE","name-operation":"to_lower_case()","naming-properties":[{"property-name":"VNF_NAME"},{"property-name":"SEQUENCE","increment-sequence":{"max":"zzz","scope":"ENTIRETY","start-value":"1","length":"3","increment":"1","sequence-type":"alpha-numeric"}},{"property-name":"NFC_NAMING_CODE"},{"property-value":"-","property-name":"DELIMITER"}]},{"naming-type":"VF-MODULE","naming-recipe":"VNF_NAME|DELIMITER|VF_MODULE_LABEL|DELIMITER|VF_MODULE_TYPE|DELIMITER|SEQUENCE","name-operation":"to_lower_case()","naming-properties":[{"property-name":"VNF_NAME"},{"property-value":"-","property-name":"DELIMITER"},{"property-name":"VF_MODULE_LABEL"},{"property-name":"VF_MODULE_TYPE"},{"property-name":"SEQUENCE","increment-sequence":{"max":"zzz","scope":"PRECEEDING","start-value":"1","length":"3","increment":"1","sequence-type":"alpha-numeric"}}]}]},"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0","metadata":{"policy-id":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","policy-version":"1.0.0"}} 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:20.017+00:00|INFO|StdXacmlApplicationServiceProvider|KAFKA-source-policy-pdp-pap] Xacml Policy is 14:59:57 policy-xacml-pdp | <?xml version="1.0" encoding="UTF-8" standalone="yes"?> 14:59:57 policy-xacml-pdp | <Policy xmlns="urn:oasis:names:tc:xacml:3.0:core:schema:wd-17" PolicyId="SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP" Version="1.0.0" RuleCombiningAlgId="urn:oasis:names:tc:xacml:1.0:rule-combining-algorithm:first-applicable"> 14:59:57 policy-xacml-pdp | <Target> 14:59:57 policy-xacml-pdp | <AnyOf> 14:59:57 policy-xacml-pdp | <AllOf> 14:59:57 policy-xacml-pdp | <Match MatchId="urn:oasis:names:tc:xacml:1.0:function:string-equal"> 14:59:57 policy-xacml-pdp | <AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP</AttributeValue> 14:59:57 policy-xacml-pdp | <AttributeDesignator 
Category="urn:oasis:names:tc:xacml:3.0:attribute-category:resource" AttributeId="urn:oasis:names:tc:xacml:1.0:resource:resource-id" DataType="http://www.w3.org/2001/XMLSchema#string" MustBePresent="false"/> 14:59:57 policy-xacml-pdp | </Match> 14:59:57 policy-xacml-pdp | </AllOf> 14:59:57 policy-xacml-pdp | <AllOf> 14:59:57 policy-xacml-pdp | <Match MatchId="urn:oasis:names:tc:xacml:1.0:function:string-equal"> 14:59:57 policy-xacml-pdp | <AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">onap.policies.Naming</AttributeValue> 14:59:57 policy-xacml-pdp | <AttributeDesignator Category="urn:oasis:names:tc:xacml:3.0:attribute-category:resource" AttributeId="urn:org:onap:policy-type" DataType="http://www.w3.org/2001/XMLSchema#string" MustBePresent="false"/> 14:59:57 policy-xacml-pdp | </Match> 14:59:57 policy-xacml-pdp | </AllOf> 14:59:57 policy-xacml-pdp | <AllOf> 14:59:57 policy-xacml-pdp | <Match MatchId="urn:oasis:names:tc:xacml:1.0:function:string-equal"> 14:59:57 policy-xacml-pdp | <AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">onap.policies.Naming</AttributeValue> 14:59:57 policy-xacml-pdp | <AttributeDesignator Category="urn:oasis:names:tc:xacml:3.0:attribute-category:resource" AttributeId="urn:org:onap:policy-type" DataType="http://www.w3.org/2001/XMLSchema#string" MustBePresent="false"/> 14:59:57 policy-xacml-pdp | </Match> 14:59:57 policy-xacml-pdp | <Match MatchId="urn:oasis:names:tc:xacml:1.0:function:string-equal"> 14:59:57 policy-xacml-pdp | <AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">1.0.0</AttributeValue> 14:59:57 policy-xacml-pdp | <AttributeDesignator Category="urn:oasis:names:tc:xacml:3.0:attribute-category:resource" AttributeId="urn:org:onap:policy-type-version" DataType="http://www.w3.org/2001/XMLSchema#string" MustBePresent="false"/> 14:59:57 policy-xacml-pdp | </Match> 14:59:57 policy-xacml-pdp | </AllOf> 14:59:57 policy-xacml-pdp | </AnyOf> 14:59:57 policy-xacml-pdp | </Target> 14:59:57 policy-xacml-pdp | <Rule RuleId="SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP:rule" Effect="Permit"> 14:59:57 policy-xacml-pdp | <Description>Default is to PERMIT if the policy matches.</Description> 14:59:57 policy-xacml-pdp | <Target/> 14:59:57 policy-xacml-pdp | <ObligationExpressions> 14:59:57 policy-xacml-pdp | <ObligationExpression ObligationId="urn:org:onap:rest:body" FulfillOn="Permit"> 14:59:57 policy-xacml-pdp | <AttributeAssignmentExpression AttributeId="urn:org:onap::obligation:policyid" Category="urn:oasis:names:tc:xacml:3.0:attribute-category:resource"> 14:59:57 policy-xacml-pdp | <AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP</AttributeValue> 14:59:57 policy-xacml-pdp | </AttributeAssignmentExpression> 14:59:57 policy-xacml-pdp | <AttributeAssignmentExpression AttributeId="urn:org:onap::obligation:policycontent" Category="urn:oasis:names:tc:xacml:3.0:attribute-category:resource"> 14:59:57 policy-xacml-pdp | <AttributeValue 
DataType="http://www.w3.org/2001/XMLSchema#string">{"type":"onap.policies.Naming","type_version":"1.0.0","properties":{"policy-instance-name":"ONAP_NF_NAMING_TIMESTAMP","naming-models":[{"naming-type":"VNF","naming-recipe":"AIC_CLOUD_REGION|DELIMITER|CONSTANT|DELIMITER|TIMESTAMP","name-operation":"to_lower_case()","naming-properties":[{"property-name":"AIC_CLOUD_REGION"},{"property-name":"CONSTANT","property-value":"onap-nf"},{"property-name":"TIMESTAMP"},{"property-value":"-","property-name":"DELIMITER"}]},{"naming-type":"VNFC","naming-recipe":"VNF_NAME|DELIMITER|NFC_NAMING_CODE|DELIMITER|SEQUENCE","name-operation":"to_lower_case()","naming-properties":[{"property-name":"VNF_NAME"},{"property-name":"SEQUENCE","increment-sequence":{"max":"zzz","scope":"ENTIRETY","start-value":"1","length":"3","increment":"1","sequence-type":"alpha-numeric"}},{"property-name":"NFC_NAMING_CODE"},{"property-value":"-","property-name":"DELIMITER"}]},{"naming-type":"VF-MODULE","naming-recipe":"VNF_NAME|DELIMITER|VF_MODULE_LABEL|DELIMITER|VF_MODULE_TYPE|DELIMITER|SEQUENCE","name-operation":"to_lower_case()","naming-properties":[{"property-name":"VNF_NAME"},{"property-value":"-","property-name":"DELIMITER"},{"property-name":"VF_MODULE_LABEL"},{"property-name":"VF_MODULE_TYPE"},{"property-name":"SEQUENCE","increment-sequence":{"max":"zzz","scope":"PRECEEDING","start-value":"1","length":"3","increment":"1","sequence-type":"alpha-numeric"}}]}]},"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0","metadata":{"policy-id":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","policy-version":"1.0.0"}}</AttributeValue> 14:59:57 policy-xacml-pdp | </AttributeAssignmentExpression> 14:59:57 policy-xacml-pdp | <AttributeAssignmentExpression AttributeId="urn:org:onap::obligation:policytype" Category="urn:oasis:names:tc:xacml:3.0:attribute-category:resource"> 14:59:57 policy-xacml-pdp | <AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">onap.policies.Naming</AttributeValue> 14:59:57 policy-xacml-pdp | </AttributeAssignmentExpression> 14:59:57 policy-xacml-pdp | </ObligationExpression> 14:59:57 policy-xacml-pdp | </ObligationExpressions> 14:59:57 policy-xacml-pdp | </Rule> 14:59:57 policy-xacml-pdp | </Policy> 14:59:57 policy-xacml-pdp | 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:20.023+00:00|INFO|XacmlPolicyUtils|KAFKA-source-policy-pdp-pap] Storing xacml properties {xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, root1.file=/opt/app/policy/pdpx/apps/naming/SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP_1.0.0.xml, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory, xacml.rootPolicies=root1, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.referencedPolicies=, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory} 14:59:57 policy-xacml-pdp | /opt/app/policy/pdpx/apps/naming/xacml.properties 14:59:57 policy-xacml-pdp | 
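Note: the "Storing xacml properties" entry above is the naming application writing a per-application XACML engine configuration: standard com.att.research.xacml factory properties plus the root-policy wiring, where xacml.rootPolicies names the alias root1 and root1.file points at the translated policy XML. Serialized into the xacml.properties path printed just above, the policy-specific entries would look roughly as follows (reconstructed from the logged map; a java.util.Properties file does not guarantee key order):

xacml.rootPolicies=root1
root1.file=/opt/app/policy/pdpx/apps/naming/SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP_1.0.0.xml
xacml.referencedPolicies=
xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory
xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides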
[2025-06-13T14:57:20.030+00:00|INFO|XacmlPdpApplicationManager|KAFKA-source-policy-pdp-pap] Loaded ToscaPolicy {policy-id=SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP, policy-version=1.0.0} into application naming 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:20.031+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 14:59:57 policy-xacml-pdp | {"pdpType":"xacml","state":"PASSIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"}],"response":{"responseTo":"243683ae-56ab-4597-926a-fcce27e0e31d","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"587a891a-49c5-4bf1-8169-985183639997","timestampMs":1749826640030,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:20.037+00:00|INFO|XacmlPdpHearbeatPublisher|pool-2-thread-1] Sending Xacml PDP heartbeat to the PAP - PdpStatus(super=PdpMessage(messageName=PDP_STATUS, requestId=55ff5c69-c399-49d4-a95f-d4c543d908a0, timestampMs=1749826640037, name=xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e, pdpGroup=defaultGroup, pdpSubgroup=xacml), pdpType=xacml, state=PASSIVE, healthy=HEALTHY, description=null, policies=[SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP 1.0.0], deploymentInstanceInfo=null, properties=null, response=null) 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:20.038+00:00|INFO|network|pool-2-thread-1] [OUT|KAFKA|policy-pdp-pap] 14:59:57 policy-xacml-pdp | {"pdpType":"xacml","state":"PASSIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"}],"messageName":"PDP_STATUS","requestId":"55ff5c69-c399-49d4-a95f-d4c543d908a0","timestampMs":1749826640037,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:20.044+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 14:59:57 policy-xacml-pdp | {"pdpType":"xacml","state":"PASSIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"}],"response":{"responseTo":"243683ae-56ab-4597-926a-fcce27e0e31d","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"587a891a-49c5-4bf1-8169-985183639997","timestampMs":1749826640030,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:20.045+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:20.053+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 14:59:57 policy-xacml-pdp | {"pdpType":"xacml","state":"PASSIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"}],"messageName":"PDP_STATUS","requestId":"55ff5c69-c399-49d4-a95f-d4c543d908a0","timestampMs":1749826640037,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:20.053+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:20.071+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 14:59:57 policy-xacml-pdp | 
{"source":"pap-8d981b63-9064-4d54-8468-b1eb1f91dc26","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"d49dcaf6-23f5-41e2-86f9-c004bd57c4bb","timestampMs":1749826639851,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:20.072+00:00|INFO|XacmlPdpStateChangeListener|KAFKA-source-policy-pdp-pap] PDP State Change message has been received from the PAP - PdpStateChange(super=PdpMessage(messageName=PDP_STATE_CHANGE, requestId=d49dcaf6-23f5-41e2-86f9-c004bd57c4bb, timestampMs=1749826639851, name=xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e, pdpGroup=defaultGroup, pdpSubgroup=xacml), source=pap-8d981b63-9064-4d54-8468-b1eb1f91dc26, state=ACTIVE) 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:20.073+00:00|INFO|XacmlState|KAFKA-source-policy-pdp-pap] set state of org.onap.policy.pdpx.main.XacmlState@1db4588b to ACTIVE 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:20.073+00:00|INFO|XacmlState|KAFKA-source-policy-pdp-pap] State change: ACTIVE - Starting rest controller 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:20.073+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 14:59:57 policy-xacml-pdp | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","response":{"responseTo":"d49dcaf6-23f5-41e2-86f9-c004bd57c4bb","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"72e98077-ce68-4257-9fdb-7e7ad741339a","timestampMs":1749826640073,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:20.086+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 14:59:57 policy-xacml-pdp | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","response":{"responseTo":"d49dcaf6-23f5-41e2-86f9-c004bd57c4bb","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"72e98077-ce68-4257-9fdb-7e7ad741339a","timestampMs":1749826640073,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:20.086+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:20.638+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 14:59:57 policy-xacml-pdp | {"source":"pap-8d981b63-9064-4d54-8468-b1eb1f91dc26","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"63c49f14-f1a0-4743-8e01-8dc98e4cfb41","timestampMs":1749826640376,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:20.639+00:00|INFO|XacmlPdpUpdateListener|KAFKA-source-policy-pdp-pap] PDP update message has been received from the PAP - PdpUpdate(super=PdpMessage(messageName=PDP_UPDATE, requestId=63c49f14-f1a0-4743-8e01-8dc98e4cfb41, timestampMs=1749826640376, name=xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e, pdpGroup=defaultGroup, pdpSubgroup=xacml), source=pap-8d981b63-9064-4d54-8468-b1eb1f91dc26, description=null, pdpHeartbeatIntervalMs=120000, policiesToBeDeployed=[], policiesToBeUndeployed=[]) 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:20.639+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 14:59:57 policy-xacml-pdp | 
{"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"}],"response":{"responseTo":"63c49f14-f1a0-4743-8e01-8dc98e4cfb41","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"93d49d05-9ee7-4d6b-9028-491a1ccee074","timestampMs":1749826640639,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:20.650+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 14:59:57 policy-xacml-pdp | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"}],"response":{"responseTo":"63c49f14-f1a0-4743-8e01-8dc98e4cfb41","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"93d49d05-9ee7-4d6b-9028-491a1ccee074","timestampMs":1749826640639,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:20.650+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:35.675+00:00|INFO|RequestLog|qtp2014233765-30] 172.17.0.4 - policyadmin [13/Jun/2025:14:57:35 +0000] "GET /metrics HTTP/1.1" 200 2118 "" "Prometheus/3.4.1" 14:59:57 policy-xacml-pdp | [2025-06-13T14:57:42.731+00:00|INFO|RequestLog|qtp2014233765-29] 172.17.0.1 - - [13/Jun/2025:14:57:42 +0000] "GET / HTTP/1.1" 401 423 "" "curl/7.58.0" 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:26.049+00:00|INFO|RequestLog|qtp2014233765-26] 172.17.0.6 - policyadmin [13/Jun/2025:14:58:26 +0000] "GET /policy/pdpx/v1/healthcheck?null HTTP/1.1" 200 110 "" "python-requests/2.32.4" 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:26.072+00:00|INFO|RequestLog|qtp2014233765-27] 172.17.0.6 - policyadmin [13/Jun/2025:14:58:26 +0000] "GET /metrics?null HTTP/1.1" 200 2042 "" "python-requests/2.32.4" 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:27.547+00:00|INFO|GuardTranslator|qtp2014233765-27] Converting Request DecisionRequest(onapName=Guard, onapComponent=Guard-component, onapInstance=Guard-component-instance, requestId=unique-request-guard-1, context=null, action=guard, currentDateTime=null, currentDate=null, currentTime=null, timeZone=null, resource={guard={actor=APPC, operation=ModifyConfig, target=f17face5-69cb-4c88-9e0b-7426db7edddd, requestId=c7c6a4aa-bb61-4a15-b831-ba1472dd4a65, clname=ControlLoop-vFirewall-d0a1dfc6-94f5-4fd4-a5b5-4630b438850a}}) 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:27.567+00:00|WARN|RequestParser|qtp2014233765-27] Unable to extract attribute value from object: urn:oasis:names:tc:xacml:1.0:environment:current-dateTime 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:27.567+00:00|WARN|RequestParser|qtp2014233765-27] Unable to extract attribute value from object: urn:oasis:names:tc:xacml:1.0:environment:current-date 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:27.567+00:00|WARN|RequestParser|qtp2014233765-27] Unable to extract attribute value from object: urn:oasis:names:tc:xacml:1.0:environment:current-time 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:27.567+00:00|WARN|RequestParser|qtp2014233765-27] Unable to extract attribute value from object: urn:org:onap:guard:timezone 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:27.568+00:00|WARN|RequestParser|qtp2014233765-27] Unable to extract attribute value from object: urn:org:onap:guard:target:vf-count 14:59:57 policy-xacml-pdp | 
[2025-06-13T14:58:27.568+00:00|WARN|RequestParser|qtp2014233765-27] Unable to extract attribute value from object: urn:org:onap:guard:target:generic-vnf.vnf-name 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:27.568+00:00|WARN|RequestParser|qtp2014233765-27] Unable to extract attribute value from object: urn:org:onap:guard:target:generic-vnf.vnf-id 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:27.568+00:00|WARN|RequestParser|qtp2014233765-27] Unable to extract attribute value from object: urn:org:onap:guard:target:generic-vnf.vnf-type 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:27.568+00:00|WARN|RequestParser|qtp2014233765-27] Unable to extract attribute value from object: urn:org:onap:guard:target:generic-vnf.nf-naming-code 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:27.568+00:00|WARN|RequestParser|qtp2014233765-27] Unable to extract attribute value from object: urn:org:onap:guard:target:vserver.vserver-id 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:27.568+00:00|WARN|RequestParser|qtp2014233765-27] Unable to extract attribute value from object: urn:org:onap:guard:target:cloud-region.cloud-region-id 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:27.573+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-27] Constructed using properties {count-recent-operations.persistenceunit=OperationsHistoryPU, get-operation-outcome.name=GetOperationOutcome, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:oasis:names:tc:xacml:3.0:policy-combining-algorithm:deny-overrides, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, count-recent-operations.classname=org.onap.policy.pdp.xacml.application.common.operationshistory.CountRecentOperationsPip, get-operation-outcome.description=Returns operation outcome, count-recent-operations.description=Returns operation counts based on time window, jakarta.persistence.jdbc.password=policy_user, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory, get-operation-outcome.issuer=urn:org:onap:xacml:guard:get-operation-outcome, get-operation-outcome.persistenceunit=OperationsHistoryPU, jakarta.persistence.jdbc.driver=org.postgresql.Driver, count-recent-operations.name=CountRecentOperations, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, jakarta.persistence.jdbc.url=jdbc:postgresql://postgres:5432/operationshistory, jakarta.persistence.jdbc.user=policy_user, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, count-recent-operations.issuer=urn:org:onap:xacml:guard:count-recent-operations, xacml.pip.engines=count-recent-operations,get-operation-outcome, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, get-operation-outcome.classname=org.onap.policy.pdp.xacml.application.common.operationshistory.GetOperationOutcomePip} 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:27.573+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-27] Initializing OnapPolicyFinderFactory Properties 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:27.573+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-27] Combining root policies with 
urn:oasis:names:tc:xacml:3.0:policy-combining-algorithm:deny-overrides 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:27.579+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-27] Root Policies: 1 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:27.579+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-27] Referenced Policies: 0 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:27.580+00:00|INFO|StdPolicyFinder|qtp2014233765-27] Updating policy map with policy efa1dcb1-71d0-4b50-b930-711c0f3c432e version 1.0 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:27.584+00:00|INFO|StdOnapPip|qtp2014233765-27] Configuring historyDb PIP {count-recent-operations.persistenceunit=OperationsHistoryPU, get-operation-outcome.name=GetOperationOutcome, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:oasis:names:tc:xacml:3.0:policy-combining-algorithm:deny-overrides, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, count-recent-operations.classname=org.onap.policy.pdp.xacml.application.common.operationshistory.CountRecentOperationsPip, get-operation-outcome.description=Returns operation outcome, count-recent-operations.description=Returns operation counts based on time window, jakarta.persistence.jdbc.password=policy_user, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory, get-operation-outcome.issuer=urn:org:onap:xacml:guard:get-operation-outcome, get-operation-outcome.persistenceunit=OperationsHistoryPU, jakarta.persistence.jdbc.driver=org.postgresql.Driver, count-recent-operations.name=CountRecentOperations, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, jakarta.persistence.jdbc.url=jdbc:postgresql://postgres:5432/operationshistory, jakarta.persistence.jdbc.user=policy_user, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, count-recent-operations.issuer=urn:org:onap:xacml:guard:count-recent-operations, xacml.pip.engines=count-recent-operations,get-operation-outcome, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, get-operation-outcome.classname=org.onap.policy.pdp.xacml.application.common.operationshistory.GetOperationOutcomePip} 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:27.671+00:00|INFO|LogHelper|qtp2014233765-27] HHH000204: Processing PersistenceUnitInfo [name: OperationsHistoryPU] 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:27.711+00:00|INFO|Version|qtp2014233765-27] HHH000412: Hibernate ORM core version 6.6.16.Final 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:27.733+00:00|INFO|RegionFactoryInitiator|qtp2014233765-27] HHH000026: Second-level cache disabled 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:27.875+00:00|WARN|pooling|qtp2014233765-27] HHH10001002: Using built-in connection pool (not intended for production use) 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:28.105+00:00|INFO|pooling|qtp2014233765-27] HHH10001005: Database info: 14:59:57 policy-xacml-pdp | Database JDBC URL [jdbc:postgresql://postgres:5432/operationshistory] 14:59:57 policy-xacml-pdp | Database driver: org.postgresql.Driver 14:59:57 policy-xacml-pdp | Database version: 16.4 14:59:57 policy-xacml-pdp | 
Autocommit mode: false 14:59:57 policy-xacml-pdp | Isolation level: undefined/unknown 14:59:57 policy-xacml-pdp | Minimum pool size: 1 14:59:57 policy-xacml-pdp | Maximum pool size: 20 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:28.952+00:00|INFO|JtaPlatformInitiator|qtp2014233765-27] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:28.985+00:00|INFO|StdOnapPip|qtp2014233765-27] Configuring historyDb PIP {count-recent-operations.persistenceunit=OperationsHistoryPU, get-operation-outcome.name=GetOperationOutcome, xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:oasis:names:tc:xacml:3.0:policy-combining-algorithm:deny-overrides, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, count-recent-operations.classname=org.onap.policy.pdp.xacml.application.common.operationshistory.CountRecentOperationsPip, get-operation-outcome.description=Returns operation outcome, count-recent-operations.description=Returns operation counts based on time window, jakarta.persistence.jdbc.password=policy_user, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory, get-operation-outcome.issuer=urn:org:onap:xacml:guard:get-operation-outcome, get-operation-outcome.persistenceunit=OperationsHistoryPU, jakarta.persistence.jdbc.driver=org.postgresql.Driver, count-recent-operations.name=CountRecentOperations, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, jakarta.persistence.jdbc.url=jdbc:postgresql://postgres:5432/operationshistory, jakarta.persistence.jdbc.user=policy_user, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, count-recent-operations.issuer=urn:org:onap:xacml:guard:count-recent-operations, xacml.pip.engines=count-recent-operations,get-operation-outcome, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory, get-operation-outcome.classname=org.onap.policy.pdp.xacml.application.common.operationshistory.GetOperationOutcomePip} 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:28.988+00:00|INFO|LogHelper|qtp2014233765-27] HHH000204: Processing PersistenceUnitInfo [name: OperationsHistoryPU] 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:28.990+00:00|INFO|RegionFactoryInitiator|qtp2014233765-27] HHH000026: Second-level cache disabled 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:29.007+00:00|WARN|pooling|qtp2014233765-27] HHH10001002: Using built-in connection pool (not intended for production use) 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:29.043+00:00|INFO|pooling|qtp2014233765-27] HHH10001005: Database info: 14:59:57 policy-xacml-pdp | Database JDBC URL [jdbc:postgresql://postgres:5432/operationshistory] 14:59:57 policy-xacml-pdp | Database driver: org.postgresql.Driver 14:59:57 policy-xacml-pdp | Database version: 16.4 14:59:57 policy-xacml-pdp | Autocommit mode: false 14:59:57 policy-xacml-pdp | Isolation level: undefined/unknown 14:59:57 policy-xacml-pdp | Minimum pool size: 1 14:59:57 policy-xacml-pdp | Maximum pool size: 20 14:59:57 policy-xacml-pdp | 
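Note: the Hibernate activity above is the guard application lazily wiring its operations-history PIP (persistence unit OperationsHistoryPU against the postgres operationshistory database) on the first decision request; the decision itself completes just below with "Elapsed Time: 1510ms", a NotApplicable result, and a 200 on POST /policy/pdpx/v1/decision?abbrev=true. For orientation, a sketch of the kind of call the CSIT suite issues (the access-log entries show user agent python-requests/2.32.4); the host in the URL is a placeholder, while the path, port, credentials, query parameter and body fields are taken from this log. The ONAPName-style key casing follows the public decision API examples and is an assumption here, since the log only shows the parsed DecisionRequest.

import requests

# Host is hypothetical; port 6969 and the basic-auth credentials come from
# the JettyServletServer dump earlier in this log.
DECISION_URL = "http://xacml-pdp.example:6969/policy/pdpx/v1/decision"

body = {
    "ONAPName": "Guard",
    "ONAPComponent": "Guard-component",
    "ONAPInstance": "Guard-component-instance",
    "requestId": "unique-request-guard-1",
    "action": "guard",
    "resource": {
        "guard": {
            "actor": "APPC",
            "operation": "ModifyConfig",
            "target": "f17face5-69cb-4c88-9e0b-7426db7edddd",
            "requestId": "c7c6a4aa-bb61-4a15-b831-ba1472dd4a65",
            "clname": "ControlLoop-vFirewall-d0a1dfc6-94f5-4fd4-a5b5-4630b438850a",
        }
    },
}

resp = requests.post(
    DECISION_URL,
    params={"abbrev": "true"},          # matches ?abbrev=true in the access log
    json=body,
    auth=("policyadmin", "zb!XztG34"),  # as printed in the Jetty config dump
    timeout=30,
)
resp.raise_for_status()
print(resp.json())  # here: NotApplicable, since no guard policy is deployed yet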
[2025-06-13T14:58:29.074+00:00|INFO|JtaPlatformInitiator|qtp2014233765-27] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:29.078+00:00|INFO|StdXacmlApplicationServiceProvider|qtp2014233765-27] Elapsed Time: 1510ms 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:29.078+00:00|INFO|GuardTranslator|qtp2014233765-27] Converting Response {results=[{decision=NotApplicable,status={statusCode={statusCodeValue=urn:oasis:names:tc:xacml:1.0:status:ok}},attributeCategories=[{category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,attributes=[{attributeId=urn:oasis:names:tc:xacml:1.0:subject:subject-id,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=Guard}],includeInResults=true}{attributeId=urn:org:onap:onap-component,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=Guard-component}],includeInResults=true}{attributeId=urn:org:onap:onap-instance,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=Guard-component-instance}],includeInResults=true}{attributeId=urn:org:onap:guard:request:request-id,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=unique-request-guard-1}],includeInResults=true}]}{category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributes=[{attributeId=urn:org:onap:guard:clname:clname-id,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=ControlLoop-vFirewall-d0a1dfc6-94f5-4fd4-a5b5-4630b438850a}],includeInResults=true}{attributeId=urn:org:onap:guard:actor:actor-id,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=APPC}],includeInResults=true}{attributeId=urn:org:onap:guard:operation:operation-id,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=ModifyConfig}],includeInResults=true}{attributeId=urn:org:onap:guard:target:target-id,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=f17face5-69cb-4c88-9e0b-7426db7edddd}],includeInResults=true}]}]}]} 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:29.084+00:00|INFO|RequestLog|qtp2014233765-27] 172.17.0.6 - policyadmin [13/Jun/2025:14:58:27 +0000] "POST /policy/pdpx/v1/decision?abbrev=true HTTP/1.1" 200 19 "" "python-requests/2.32.4" 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:29.684+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 14:59:57 policy-xacml-pdp | {"source":"pap-8d981b63-9064-4d54-8468-b1eb1f91dc26","description":"The default group that registers all supported policy types and 
pdps.","policiesToBeDeployed":[{"type":"onap.policies.monitoring.tcagen2","type_version":"1.0.0","properties":{"tca.policy":{"domain":"measurementsForVfScaling","metricsPerEventName":[{"eventName":"Measurement_vGMUX","controlLoopSchemaType":"VNF","policyScope":"DCAE","policyName":"DCAE.Config_tca-hi-lo","policyVersion":"v0.0.1","thresholds":[{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"EQUAL","severity":"MAJOR","closedLoopEventStatus":"ABATED"},{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"GREATER","severity":"CRITICAL","closedLoopEventStatus":"ONSET"}]}]}},"name":"onap.restart.tca","version":"1.0.0","metadata":{"policy-id":"onap.restart.tca","policy-version":"1.0.0"}},{"type":"onap.policies.optimization.resource.AffinityPolicy","type_version":"1.0.0","properties":{"geography":[],"identity":"affinity_vCPE","scope":[],"affinityProperties":{"qualifier":"same","category":"complex"},"resources":[],"services":[],"applicableResources":"any"},"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0","metadata":{"policy-id":"OSDF_CASABLANCA.Affinity_Default","policy-version":"1.0.0"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"6a5c2c9f-6c22-44fe-904b-515d314bb708","timestampMs":1749826709625,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:29.686+00:00|INFO|XacmlPdpUpdateListener|KAFKA-source-policy-pdp-pap] PDP update message has been received from the PAP - PdpUpdate(super=PdpMessage(messageName=PDP_UPDATE, requestId=6a5c2c9f-6c22-44fe-904b-515d314bb708, timestampMs=1749826709625, name=xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e, pdpGroup=defaultGroup, pdpSubgroup=xacml), source=pap-8d981b63-9064-4d54-8468-b1eb1f91dc26, description=The default group that registers all supported policy types and pdps., pdpHeartbeatIntervalMs=null, policiesToBeDeployed=[ToscaPolicy(super=ToscaWithTypeAndObjectProperties(type=onap.policies.monitoring.tcagen2, typeVersion=1.0.0, properties={tca.policy={domain=measurementsForVfScaling, metricsPerEventName=[{eventName=Measurement_vGMUX, controlLoopSchemaType=VNF, policyScope=DCAE, policyName=DCAE.Config_tca-hi-lo, policyVersion=v0.0.1, thresholds=[{closedLoopControlName=ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e, version=1.0.2, fieldPath=$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value, thresholdValue=0, direction=EQUAL, severity=MAJOR, closedLoopEventStatus=ABATED}, {closedLoopControlName=ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e, version=1.0.2, fieldPath=$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value, thresholdValue=0, direction=GREATER, severity=CRITICAL, closedLoopEventStatus=ONSET}]}]}})), ToscaPolicy(super=ToscaWithTypeAndObjectProperties(type=onap.policies.optimization.resource.AffinityPolicy, typeVersion=1.0.0, properties={geography=[], identity=affinity_vCPE, scope=[], affinityProperties={qualifier=same, category=complex}, resources=[], services=[], applicableResources=any}))], policiesToBeUndeployed=[]) 14:59:57 policy-xacml-pdp | 
[2025-06-13T14:58:29.687+00:00|INFO|StdBaseTranslator|KAFKA-source-policy-pdp-pap] Obligation Policy id: onap.restart.tca type: onap.policies.monitoring.tcagen2 weight: null policy: 14:59:57 policy-xacml-pdp | {"type":"onap.policies.monitoring.tcagen2","type_version":"1.0.0","properties":{"tca.policy":{"domain":"measurementsForVfScaling","metricsPerEventName":[{"eventName":"Measurement_vGMUX","controlLoopSchemaType":"VNF","policyScope":"DCAE","policyName":"DCAE.Config_tca-hi-lo","policyVersion":"v0.0.1","thresholds":[{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"EQUAL","severity":"MAJOR","closedLoopEventStatus":"ABATED"},{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"GREATER","severity":"CRITICAL","closedLoopEventStatus":"ONSET"}]}]}},"name":"onap.restart.tca","version":"1.0.0","metadata":{"policy-id":"onap.restart.tca","policy-version":"1.0.0"}} 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:29.723+00:00|INFO|StdXacmlApplicationServiceProvider|KAFKA-source-policy-pdp-pap] Xacml Policy is 14:59:57 policy-xacml-pdp | <?xml version="1.0" encoding="UTF-8" standalone="yes"?> 14:59:57 policy-xacml-pdp | <Policy xmlns="urn:oasis:names:tc:xacml:3.0:core:schema:wd-17" PolicyId="onap.restart.tca" Version="1.0.0" RuleCombiningAlgId="urn:oasis:names:tc:xacml:1.0:rule-combining-algorithm:first-applicable"> 14:59:57 policy-xacml-pdp | <Target> 14:59:57 policy-xacml-pdp | <AnyOf> 14:59:57 policy-xacml-pdp | <AllOf> 14:59:57 policy-xacml-pdp | <Match MatchId="urn:oasis:names:tc:xacml:1.0:function:string-equal"> 14:59:57 policy-xacml-pdp | <AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">onap.restart.tca</AttributeValue> 14:59:57 policy-xacml-pdp | <AttributeDesignator Category="urn:oasis:names:tc:xacml:3.0:attribute-category:resource" AttributeId="urn:oasis:names:tc:xacml:1.0:resource:resource-id" DataType="http://www.w3.org/2001/XMLSchema#string" MustBePresent="false"/> 14:59:57 policy-xacml-pdp | </Match> 14:59:57 policy-xacml-pdp | </AllOf> 14:59:57 policy-xacml-pdp | <AllOf> 14:59:57 policy-xacml-pdp | <Match MatchId="urn:oasis:names:tc:xacml:1.0:function:string-equal"> 14:59:57 policy-xacml-pdp | <AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">onap.policies.monitoring.tcagen2</AttributeValue> 14:59:57 policy-xacml-pdp | <AttributeDesignator Category="urn:oasis:names:tc:xacml:3.0:attribute-category:resource" AttributeId="urn:org:onap:policy-type" DataType="http://www.w3.org/2001/XMLSchema#string" MustBePresent="false"/> 14:59:57 policy-xacml-pdp | </Match> 14:59:57 policy-xacml-pdp | </AllOf> 14:59:57 policy-xacml-pdp | <AllOf> 14:59:57 policy-xacml-pdp | <Match MatchId="urn:oasis:names:tc:xacml:1.0:function:string-equal"> 14:59:57 policy-xacml-pdp | <AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">onap.policies.monitoring.tcagen2</AttributeValue> 14:59:57 policy-xacml-pdp | <AttributeDesignator Category="urn:oasis:names:tc:xacml:3.0:attribute-category:resource" AttributeId="urn:org:onap:policy-type" DataType="http://www.w3.org/2001/XMLSchema#string" MustBePresent="false"/> 14:59:57 policy-xacml-pdp | </Match> 14:59:57 policy-xacml-pdp | <Match 
MatchId="urn:oasis:names:tc:xacml:1.0:function:string-equal"> 14:59:57 policy-xacml-pdp | <AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">1.0.0</AttributeValue> 14:59:57 policy-xacml-pdp | <AttributeDesignator Category="urn:oasis:names:tc:xacml:3.0:attribute-category:resource" AttributeId="urn:org:onap:policy-type-version" DataType="http://www.w3.org/2001/XMLSchema#string" MustBePresent="false"/> 14:59:57 policy-xacml-pdp | </Match> 14:59:57 policy-xacml-pdp | </AllOf> 14:59:57 policy-xacml-pdp | </AnyOf> 14:59:57 policy-xacml-pdp | </Target> 14:59:57 policy-xacml-pdp | <Rule RuleId="onap.restart.tca:rule" Effect="Permit"> 14:59:57 policy-xacml-pdp | <Description>Default is to PERMIT if the policy matches.</Description> 14:59:57 policy-xacml-pdp | <Target/> 14:59:57 policy-xacml-pdp | <ObligationExpressions> 14:59:57 policy-xacml-pdp | <ObligationExpression ObligationId="urn:org:onap:rest:body" FulfillOn="Permit"> 14:59:57 policy-xacml-pdp | <AttributeAssignmentExpression AttributeId="urn:org:onap::obligation:policyid" Category="urn:oasis:names:tc:xacml:3.0:attribute-category:resource"> 14:59:57 policy-xacml-pdp | <AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">onap.restart.tca</AttributeValue> 14:59:57 policy-xacml-pdp | </AttributeAssignmentExpression> 14:59:57 policy-xacml-pdp | <AttributeAssignmentExpression AttributeId="urn:org:onap::obligation:policycontent" Category="urn:oasis:names:tc:xacml:3.0:attribute-category:resource"> 14:59:57 policy-xacml-pdp | <AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">{"type":"onap.policies.monitoring.tcagen2","type_version":"1.0.0","properties":{"tca.policy":{"domain":"measurementsForVfScaling","metricsPerEventName":[{"eventName":"Measurement_vGMUX","controlLoopSchemaType":"VNF","policyScope":"DCAE","policyName":"DCAE.Config_tca-hi-lo","policyVersion":"v0.0.1","thresholds":[{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"EQUAL","severity":"MAJOR","closedLoopEventStatus":"ABATED"},{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"GREATER","severity":"CRITICAL","closedLoopEventStatus":"ONSET"}]}]}},"name":"onap.restart.tca","version":"1.0.0","metadata":{"policy-id":"onap.restart.tca","policy-version":"1.0.0"}}</AttributeValue> 14:59:57 policy-xacml-pdp | </AttributeAssignmentExpression> 14:59:57 policy-xacml-pdp | <AttributeAssignmentExpression AttributeId="urn:org:onap::obligation:policytype" Category="urn:oasis:names:tc:xacml:3.0:attribute-category:resource"> 14:59:57 policy-xacml-pdp | <AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">onap.policies.monitoring.tcagen2</AttributeValue> 14:59:57 policy-xacml-pdp | </AttributeAssignmentExpression> 14:59:57 policy-xacml-pdp | </ObligationExpression> 14:59:57 policy-xacml-pdp | </ObligationExpressions> 14:59:57 policy-xacml-pdp | </Rule> 14:59:57 policy-xacml-pdp | </Policy> 14:59:57 policy-xacml-pdp | 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:29.723+00:00|INFO|XacmlPolicyUtils|KAFKA-source-policy-pdp-pap] Storing xacml properties {xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, 
xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, root1.file=/opt/app/policy/pdpx/apps/monitoring/onap.restart.tca_1.0.0.xml, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory, xacml.rootPolicies=root1, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.referencedPolicies=, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory} 14:59:57 policy-xacml-pdp | /opt/app/policy/pdpx/apps/monitoring/xacml.properties 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:29.724+00:00|INFO|XacmlPdpApplicationManager|KAFKA-source-policy-pdp-pap] Loaded ToscaPolicy {policy-id=onap.restart.tca, policy-version=1.0.0} into application monitoring 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:29.724+00:00|INFO|OptimizationPdpApplication|KAFKA-source-policy-pdp-pap] optimization can support onap.policies.optimization.resource.AffinityPolicy 1.0.0 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:29.725+00:00|ERROR|StdMatchableTranslator|KAFKA-source-policy-pdp-pap] PolicyType not found in data area yet /opt/app/policy/pdpx/apps/optimization/onap.policies.optimization.resource.AffinityPolicy-1.0.0.yaml 14:59:57 policy-xacml-pdp | java.nio.file.NoSuchFileException: /opt/app/policy/pdpx/apps/optimization/onap.policies.optimization.resource.AffinityPolicy-1.0.0.yaml 14:59:57 policy-xacml-pdp | at java.base/sun.nio.fs.UnixException.translateToIOException(UnixException.java:92) 14:59:57 policy-xacml-pdp | at java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:106) 14:59:57 policy-xacml-pdp | at java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111) 14:59:57 policy-xacml-pdp | at java.base/sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:218) 14:59:57 policy-xacml-pdp | at java.base/java.nio.file.Files.newByteChannel(Files.java:380) 14:59:57 policy-xacml-pdp | at java.base/java.nio.file.Files.newByteChannel(Files.java:432) 14:59:57 policy-xacml-pdp | at java.base/java.nio.file.Files.readAllBytes(Files.java:3288) 14:59:57 policy-xacml-pdp | at org.onap.policy.pdp.xacml.application.common.std.StdMatchableTranslator.loadPolicyType(StdMatchableTranslator.java:515) 14:59:57 policy-xacml-pdp | at org.onap.policy.pdp.xacml.application.common.std.StdMatchableTranslator.findPolicyType(StdMatchableTranslator.java:480) 14:59:57 policy-xacml-pdp | at org.onap.policy.pdp.xacml.application.common.std.StdMatchableTranslator.convertPolicy(StdMatchableTranslator.java:241) 14:59:57 policy-xacml-pdp | at org.onap.policy.xacml.pdp.application.optimization.OptimizationPdpApplicationTranslator.convertPolicy(OptimizationPdpApplicationTranslator.java:72) 14:59:57 policy-xacml-pdp | at org.onap.policy.pdp.xacml.application.common.std.StdXacmlApplicationServiceProvider.loadPolicy(StdXacmlApplicationServiceProvider.java:127) 14:59:57 policy-xacml-pdp | at org.onap.policy.pdpx.main.rest.XacmlPdpApplicationManager.loadDeployedPolicy(XacmlPdpApplicationManager.java:199) 14:59:57 
policy-xacml-pdp | at org.onap.policy.pdpx.main.comm.XacmlPdpUpdatePublisher.handlePdpUpdate(XacmlPdpUpdatePublisher.java:91) 14:59:57 policy-xacml-pdp | at org.onap.policy.pdpx.main.comm.listeners.XacmlPdpUpdateListener.onTopicEvent(XacmlPdpUpdateListener.java:72) 14:59:57 policy-xacml-pdp | at org.onap.policy.pdpx.main.comm.listeners.XacmlPdpUpdateListener.onTopicEvent(XacmlPdpUpdateListener.java:36) 14:59:57 policy-xacml-pdp | at org.onap.policy.common.endpoints.listeners.ScoListener.onTopicEvent(ScoListener.java:75) 14:59:57 policy-xacml-pdp | at org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher.onTopicEvent(MessageTypeDispatcher.java:97) 14:59:57 policy-xacml-pdp | at org.onap.policy.common.endpoints.listeners.JsonListener.onTopicEvent(JsonListener.java:61) 14:59:57 policy-xacml-pdp | at org.onap.policy.common.message.bus.event.base.TopicBase.broadcast(TopicBase.java:170) 14:59:57 policy-xacml-pdp | at org.onap.policy.common.message.bus.event.base.SingleThreadedBusTopicSource.fetchAllMessages(SingleThreadedBusTopicSource.java:252) 14:59:57 policy-xacml-pdp | at org.onap.policy.common.message.bus.event.base.SingleThreadedBusTopicSource.run(SingleThreadedBusTopicSource.java:235) 14:59:57 policy-xacml-pdp | at java.base/java.lang.Thread.run(Thread.java:840) 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:29.773+00:00|INFO|GsonMessageBodyHandler|KAFKA-source-policy-pdp-pap] Using GSON for REST calls 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:29.775+00:00|INFO|GsonMessageBodyHandler|KAFKA-source-policy-pdp-pap] Using GSON for REST calls 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:30.137+00:00|INFO|StdMatchableTranslator|KAFKA-source-policy-pdp-pap] Successfully pulled onap.policies.optimization.resource.AffinityPolicy 1.0.0 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:30.169+00:00|INFO|MatchablePolicyType|KAFKA-source-policy-pdp-pap] Scanning PolicyType onap.policies.optimization.resource.AffinityPolicy:1.0.0 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:30.169+00:00|INFO|MatchablePolicyType|KAFKA-source-policy-pdp-pap] Retrieving datatype policy.data.affinityProperties_properties 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:30.169+00:00|INFO|MatchablePolicyType|KAFKA-source-policy-pdp-pap] Scanning PolicyType onap.policies.optimization.Resource:1.0.0 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:30.170+00:00|INFO|MatchablePolicyType|KAFKA-source-policy-pdp-pap] Scanning PolicyType onap.policies.Optimization:1.0.0 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:30.170+00:00|INFO|MatchablePolicyType|KAFKA-source-policy-pdp-pap] Found root - done scanning 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:30.170+00:00|INFO|StdBaseTranslator|KAFKA-source-policy-pdp-pap] Obligation Policy id: OSDF_CASABLANCA.Affinity_Default type: onap.policies.optimization.resource.AffinityPolicy weight: 0 policy: 14:59:57 policy-xacml-pdp | {"type":"onap.policies.optimization.resource.AffinityPolicy","type_version":"1.0.0","properties":{"geography":[],"identity":"affinity_vCPE","scope":[],"affinityProperties":{"qualifier":"same","category":"complex"},"resources":[],"services":[],"applicableResources":"any"},"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0","metadata":{"policy-id":"OSDF_CASABLANCA.Affinity_Default","policy-version":"1.0.0"}} 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:30.189+00:00|INFO|StdMatchableTranslator|KAFKA-source-policy-pdp-pap] <?xml version="1.0" encoding="UTF-8" standalone="yes"?> 14:59:57 policy-xacml-pdp | <Policy 
xmlns="urn:oasis:names:tc:xacml:3.0:core:schema:wd-17" PolicyId="OSDF_CASABLANCA.Affinity_Default" Version="1.0.0" RuleCombiningAlgId="urn:oasis:names:tc:xacml:1.0:rule-combining-algorithm:first-applicable"> 14:59:57 policy-xacml-pdp | <Target/> 14:59:57 policy-xacml-pdp | <Rule RuleId="OSDF_CASABLANCA.Affinity_Default:rule" Effect="Permit"> 14:59:57 policy-xacml-pdp | <Description>Default is to PERMIT if the policy matches.</Description> 14:59:57 policy-xacml-pdp | <Target/> 14:59:57 policy-xacml-pdp | <Condition> 14:59:57 policy-xacml-pdp | <Apply FunctionId="urn:oasis:names:tc:xacml:1.0:function:or"> 14:59:57 policy-xacml-pdp | <Description>IF exists and is equal</Description> 14:59:57 policy-xacml-pdp | <Apply FunctionId="urn:oasis:names:tc:xacml:1.0:function:integer-equal"> 14:59:57 policy-xacml-pdp | <Description>Does the policy-type attribute exist?</Description> 14:59:57 policy-xacml-pdp | <Apply FunctionId="urn:oasis:names:tc:xacml:1.0:function:string-bag-size"> 14:59:57 policy-xacml-pdp | <Description>Get the size of policy-type attributes</Description> 14:59:57 policy-xacml-pdp | <AttributeDesignator Category="urn:oasis:names:tc:xacml:3.0:attribute-category:resource" AttributeId="urn:org:onap:policy-type" DataType="http://www.w3.org/2001/XMLSchema#string" MustBePresent="false"/> 14:59:57 policy-xacml-pdp | </Apply> 14:59:57 policy-xacml-pdp | <AttributeValue DataType="http://www.w3.org/2001/XMLSchema#integer">0</AttributeValue> 14:59:57 policy-xacml-pdp | </Apply> 14:59:57 policy-xacml-pdp | <Apply FunctionId="urn:oasis:names:tc:xacml:1.0:function:string-is-in"> 14:59:57 policy-xacml-pdp | <Description>Is this policy-type in the list?</Description> 14:59:57 policy-xacml-pdp | <AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">onap.policies.optimization.resource.AffinityPolicy</AttributeValue> 14:59:57 policy-xacml-pdp | <AttributeDesignator Category="urn:oasis:names:tc:xacml:3.0:attribute-category:resource" AttributeId="urn:org:onap:policy-type" DataType="http://www.w3.org/2001/XMLSchema#string" MustBePresent="false"/> 14:59:57 policy-xacml-pdp | </Apply> 14:59:57 policy-xacml-pdp | </Apply> 14:59:57 policy-xacml-pdp | </Condition> 14:59:57 policy-xacml-pdp | </Rule> 14:59:57 policy-xacml-pdp | <ObligationExpressions> 14:59:57 policy-xacml-pdp | <ObligationExpression ObligationId="urn:org:onap:rest:body" FulfillOn="Permit"> 14:59:57 policy-xacml-pdp | <AttributeAssignmentExpression AttributeId="urn:org:onap::obligation:policyid" Category="urn:oasis:names:tc:xacml:3.0:attribute-category:resource"> 14:59:57 policy-xacml-pdp | <AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">OSDF_CASABLANCA.Affinity_Default</AttributeValue> 14:59:57 policy-xacml-pdp | </AttributeAssignmentExpression> 14:59:57 policy-xacml-pdp | <AttributeAssignmentExpression AttributeId="urn:org:onap::obligation:policycontent" Category="urn:oasis:names:tc:xacml:3.0:attribute-category:resource"> 14:59:57 policy-xacml-pdp | <AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">{"type":"onap.policies.optimization.resource.AffinityPolicy","type_version":"1.0.0","properties":{"geography":[],"identity":"affinity_vCPE","scope":[],"affinityProperties":{"qualifier":"same","category":"complex"},"resources":[],"services":[],"applicableResources":"any"},"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0","metadata":{"policy-id":"OSDF_CASABLANCA.Affinity_Default","policy-version":"1.0.0"}}</AttributeValue> 14:59:57 policy-xacml-pdp | </AttributeAssignmentExpression> 
14:59:57 policy-xacml-pdp | <AttributeAssignmentExpression AttributeId="urn:org:onap::obligation:weight" Category="urn:oasis:names:tc:xacml:3.0:attribute-category:resource"> 14:59:57 policy-xacml-pdp | <AttributeValue DataType="http://www.w3.org/2001/XMLSchema#integer">0</AttributeValue> 14:59:57 policy-xacml-pdp | </AttributeAssignmentExpression> 14:59:57 policy-xacml-pdp | <AttributeAssignmentExpression AttributeId="urn:org:onap::obligation:policytype" Category="urn:oasis:names:tc:xacml:3.0:attribute-category:resource"> 14:59:57 policy-xacml-pdp | <AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">onap.policies.optimization.resource.AffinityPolicy</AttributeValue> 14:59:57 policy-xacml-pdp | </AttributeAssignmentExpression> 14:59:57 policy-xacml-pdp | </ObligationExpression> 14:59:57 policy-xacml-pdp | </ObligationExpressions> 14:59:57 policy-xacml-pdp | </Policy> 14:59:57 policy-xacml-pdp | 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:30.205+00:00|INFO|StdXacmlApplicationServiceProvider|KAFKA-source-policy-pdp-pap] Xacml Policy is 14:59:57 policy-xacml-pdp | <?xml version="1.0" encoding="UTF-8" standalone="yes"?> 14:59:57 policy-xacml-pdp | <Policy xmlns="urn:oasis:names:tc:xacml:3.0:core:schema:wd-17" PolicyId="OSDF_CASABLANCA.Affinity_Default" Version="1.0.0" RuleCombiningAlgId="urn:oasis:names:tc:xacml:1.0:rule-combining-algorithm:first-applicable"> 14:59:57 policy-xacml-pdp | <Target/> 14:59:57 policy-xacml-pdp | <Rule RuleId="OSDF_CASABLANCA.Affinity_Default:rule" Effect="Permit"> 14:59:57 policy-xacml-pdp | <Description>Default is to PERMIT if the policy matches.</Description> 14:59:57 policy-xacml-pdp | <Target/> 14:59:57 policy-xacml-pdp | <Condition> 14:59:57 policy-xacml-pdp | <Apply FunctionId="urn:oasis:names:tc:xacml:1.0:function:or"> 14:59:57 policy-xacml-pdp | <Description>IF exists and is equal</Description> 14:59:57 policy-xacml-pdp | <Apply FunctionId="urn:oasis:names:tc:xacml:1.0:function:integer-equal"> 14:59:57 policy-xacml-pdp | <Description>Does the policy-type attribute exist?</Description> 14:59:57 policy-xacml-pdp | <Apply FunctionId="urn:oasis:names:tc:xacml:1.0:function:string-bag-size"> 14:59:57 policy-xacml-pdp | <Description>Get the size of policy-type attributes</Description> 14:59:57 policy-xacml-pdp | <AttributeDesignator Category="urn:oasis:names:tc:xacml:3.0:attribute-category:resource" AttributeId="urn:org:onap:policy-type" DataType="http://www.w3.org/2001/XMLSchema#string" MustBePresent="false"/> 14:59:57 policy-xacml-pdp | </Apply> 14:59:57 policy-xacml-pdp | <AttributeValue DataType="http://www.w3.org/2001/XMLSchema#integer">0</AttributeValue> 14:59:57 policy-xacml-pdp | </Apply> 14:59:57 policy-xacml-pdp | <Apply FunctionId="urn:oasis:names:tc:xacml:1.0:function:string-is-in"> 14:59:57 policy-xacml-pdp | <Description>Is this policy-type in the list?</Description> 14:59:57 policy-xacml-pdp | <AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">onap.policies.optimization.resource.AffinityPolicy</AttributeValue> 14:59:57 policy-xacml-pdp | <AttributeDesignator Category="urn:oasis:names:tc:xacml:3.0:attribute-category:resource" AttributeId="urn:org:onap:policy-type" DataType="http://www.w3.org/2001/XMLSchema#string" MustBePresent="false"/> 14:59:57 policy-xacml-pdp | </Apply> 14:59:57 policy-xacml-pdp | </Apply> 14:59:57 policy-xacml-pdp | </Condition> 14:59:57 policy-xacml-pdp | </Rule> 14:59:57 policy-xacml-pdp | <ObligationExpressions> 14:59:57 policy-xacml-pdp | <ObligationExpression 
ObligationId="urn:org:onap:rest:body" FulfillOn="Permit"> 14:59:57 policy-xacml-pdp | <AttributeAssignmentExpression AttributeId="urn:org:onap::obligation:policyid" Category="urn:oasis:names:tc:xacml:3.0:attribute-category:resource"> 14:59:57 policy-xacml-pdp | <AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">OSDF_CASABLANCA.Affinity_Default</AttributeValue> 14:59:57 policy-xacml-pdp | </AttributeAssignmentExpression> 14:59:57 policy-xacml-pdp | <AttributeAssignmentExpression AttributeId="urn:org:onap::obligation:policycontent" Category="urn:oasis:names:tc:xacml:3.0:attribute-category:resource"> 14:59:57 policy-xacml-pdp | <AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">{"type":"onap.policies.optimization.resource.AffinityPolicy","type_version":"1.0.0","properties":{"geography":[],"identity":"affinity_vCPE","scope":[],"affinityProperties":{"qualifier":"same","category":"complex"},"resources":[],"services":[],"applicableResources":"any"},"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0","metadata":{"policy-id":"OSDF_CASABLANCA.Affinity_Default","policy-version":"1.0.0"}}</AttributeValue> 14:59:57 policy-xacml-pdp | </AttributeAssignmentExpression> 14:59:57 policy-xacml-pdp | <AttributeAssignmentExpression AttributeId="urn:org:onap::obligation:weight" Category="urn:oasis:names:tc:xacml:3.0:attribute-category:resource"> 14:59:57 policy-xacml-pdp | <AttributeValue DataType="http://www.w3.org/2001/XMLSchema#integer">0</AttributeValue> 14:59:57 policy-xacml-pdp | </AttributeAssignmentExpression> 14:59:57 policy-xacml-pdp | <AttributeAssignmentExpression AttributeId="urn:org:onap::obligation:policytype" Category="urn:oasis:names:tc:xacml:3.0:attribute-category:resource"> 14:59:57 policy-xacml-pdp | <AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">onap.policies.optimization.resource.AffinityPolicy</AttributeValue> 14:59:57 policy-xacml-pdp | </AttributeAssignmentExpression> 14:59:57 policy-xacml-pdp | </ObligationExpression> 14:59:57 policy-xacml-pdp | </ObligationExpressions> 14:59:57 policy-xacml-pdp | </Policy> 14:59:57 policy-xacml-pdp | 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:30.205+00:00|INFO|XacmlPolicyUtils|KAFKA-source-policy-pdp-pap] Storing xacml properties {xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, root1.file=/opt/app/policy/pdpx/apps/optimization/OSDF_CASABLANCA.Affinity_Default_1.0.0.xml, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory, xacml.rootPolicies=root1, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.referencedPolicies=, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory} 14:59:57 policy-xacml-pdp | /opt/app/policy/pdpx/apps/optimization/xacml.properties 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:30.205+00:00|INFO|XacmlPdpApplicationManager|KAFKA-source-policy-pdp-pap] Loaded ToscaPolicy 
{policy-id=OSDF_CASABLANCA.Affinity_Default, policy-version=1.0.0} into application optimization 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:30.206+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 14:59:57 policy-xacml-pdp | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"},{"name":"onap.restart.tca","version":"1.0.0"},{"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0"}],"response":{"responseTo":"6a5c2c9f-6c22-44fe-904b-515d314bb708","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"800a49c0-b071-44e7-8819-4105949c61d2","timestampMs":1749826710206,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:30.236+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 14:59:57 policy-xacml-pdp | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"},{"name":"onap.restart.tca","version":"1.0.0"},{"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0"}],"response":{"responseTo":"6a5c2c9f-6c22-44fe-904b-515d314bb708","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"800a49c0-b071-44e7-8819-4105949c61d2","timestampMs":1749826710206,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:30.236+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:35.580+00:00|INFO|RequestLog|qtp2014233765-32] 172.17.0.4 - policyadmin [13/Jun/2025:14:58:35 +0000] "GET /metrics HTTP/1.1" 200 2159 "" "Prometheus/3.4.1" 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:53.879+00:00|INFO|StdCombinedPolicyResultsTranslator|qtp2014233765-26] Converting Request DecisionRequest(onapName=DCAE, onapComponent=PolicyHandler, onapInstance=622431a4-9dea-4eae-b443-3b2164639c64, requestId=null, context=null, action=configure, currentDateTime=null, currentDate=null, currentTime=null, timeZone=null, resource={policy-id=onap.restart.tca}) 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:53.881+00:00|WARN|RequestParser|qtp2014233765-26] Unable to extract attribute value from object: urn:org:onap:policy-type 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:53.882+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-26] Constructed using properties {xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, root1.file=/opt/app/policy/pdpx/apps/monitoring/onap.restart.tca_1.0.0.xml, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory, xacml.rootPolicies=root1, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.referencedPolicies=, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, 
xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory} 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:53.882+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-26] Initializing OnapPolicyFinderFactory Properties 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:53.882+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-26] Combining root policies with urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:53.882+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-26] Loading policy file /opt/app/policy/pdpx/apps/monitoring/onap.restart.tca_1.0.0.xml 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:53.900+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-26] Root Policies: 1 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:53.900+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-26] Referenced Policies: 0 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:53.900+00:00|INFO|StdPolicyFinder|qtp2014233765-26] Updating policy map with policy f172ec62-5b8b-456a-9a1a-5fff266087b4 version 1.0 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:53.900+00:00|INFO|StdPolicyFinder|qtp2014233765-26] Updating policy map with policy onap.restart.tca version 1.0.0 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:53.916+00:00|INFO|StdXacmlApplicationServiceProvider|qtp2014233765-26] Elapsed Time: 35ms 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:53.917+00:00|INFO|StdBaseTranslator|qtp2014233765-26] Converting Response {results=[{decision=Permit,status={statusCode={statusCodeValue=urn:oasis:names:tc:xacml:1.0:status:ok}},obligations=[{id=urn:org:onap:rest:body,attributeAssignments=[{attributeId=urn:org:onap::obligation:policyid,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributeValue={dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=onap.restart.tca}}{attributeId=urn:org:onap::obligation:policycontent,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributeValue={dataTypeId=http://www.w3.org/2001/XMLSchema#string,value={"type":"onap.policies.monitoring.tcagen2","type_version":"1.0.0","properties":{"tca.policy":{"domain":"measurementsForVfScaling","metricsPerEventName":[{"eventName":"Measurement_vGMUX","controlLoopSchemaType":"VNF","policyScope":"DCAE","policyName":"DCAE.Config_tca-hi-lo","policyVersion":"v0.0.1","thresholds":[{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"EQUAL","severity":"MAJOR","closedLoopEventStatus":"ABATED"},{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"GREATER","severity":"CRITICAL","closedLoopEventStatus":"ONSET"}]}]}},"name":"onap.restart.tca","version":"1.0.0","metadata":{"policy-id":"onap.restart.tca","policy-version":"1.0.0"}}}}{attributeId=urn:org:onap::obligation:policytype,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributeValue={dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=onap.policies.monitoring.tcagen2}}]}],attributeCategories=[{category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,attributes=[{attributeId=urn:oasis:names:tc:xacml:1.0:subject:subject-id,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3
.org/2001/XMLSchema#string,value=DCAE}],includeInResults=true}{attributeId=urn:org:onap:onap-component,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=PolicyHandler}],includeInResults=true}{attributeId=urn:org:onap:onap-instance,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=622431a4-9dea-4eae-b443-3b2164639c64}],includeInResults=true}]}{category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributes=[{attributeId=urn:oasis:names:tc:xacml:1.0:resource:resource-id,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=onap.restart.tca}],includeInResults=true}]}],policyIdentifiers=[{id=onap.restart.tca,version=1.0.0}],policySetIdentifiers=[{id=f172ec62-5b8b-456a-9a1a-5fff266087b4,version=1.0}]}]} 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:53.917+00:00|INFO|StdCombinedPolicyResultsTranslator|qtp2014233765-26] Obligation: urn:org:onap:rest:body 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:53.917+00:00|WARN|StdCombinedPolicyResultsTranslator|qtp2014233765-26] Advice found - not supported in this class class org.onap.policy.pdp.xacml.application.common.std.StdCombinedPolicyResultsTranslator 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:53.917+00:00|INFO|MonitoringPdpApplication|qtp2014233765-26] Abbreviating decision results DecisionResponse(status=null, message=null, advice=null, obligations=null, policies={onap.restart.tca={type=onap.policies.monitoring.tcagen2, type_version=1.0.0, properties={tca.policy={domain=measurementsForVfScaling, metricsPerEventName=[{eventName=Measurement_vGMUX, controlLoopSchemaType=VNF, policyScope=DCAE, policyName=DCAE.Config_tca-hi-lo, policyVersion=v0.0.1, thresholds=[{closedLoopControlName=ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e, version=1.0.2, fieldPath=$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value, thresholdValue=0, direction=EQUAL, severity=MAJOR, closedLoopEventStatus=ABATED}, {closedLoopControlName=ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e, version=1.0.2, fieldPath=$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value, thresholdValue=0, direction=GREATER, severity=CRITICAL, closedLoopEventStatus=ONSET}]}]}}, name=onap.restart.tca, version=1.0.0, metadata={policy-id=onap.restart.tca, policy-version=1.0.0}}}, attributes=null) 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:53.919+00:00|INFO|RequestLog|qtp2014233765-26] 172.17.0.6 - policyadmin [13/Jun/2025:14:58:53 +0000] "POST /policy/pdpx/v1/decision?abbrev=true HTTP/1.1" 200 146 "" "python-requests/2.32.4" 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:53.932+00:00|INFO|StdCombinedPolicyResultsTranslator|qtp2014233765-29] Converting Request DecisionRequest(onapName=DCAE, onapComponent=PolicyHandler, onapInstance=622431a4-9dea-4eae-b443-3b2164639c64, requestId=null, context=null, action=configure, currentDateTime=null, currentDate=null, currentTime=null, timeZone=null, resource={policy-id=onap.restart.tca}) 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:53.932+00:00|WARN|RequestParser|qtp2014233765-29] Unable to extract attribute value from object: urn:org:onap:policy-type 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:53.933+00:00|INFO|StdXacmlApplicationServiceProvider|qtp2014233765-29] Elapsed Time: 1ms 14:59:57 policy-xacml-pdp | 
[2025-06-13T14:58:53.933+00:00|INFO|StdBaseTranslator|qtp2014233765-29] Converting Response {results=[{decision=Permit,status={statusCode={statusCodeValue=urn:oasis:names:tc:xacml:1.0:status:ok}},obligations=[{id=urn:org:onap:rest:body,attributeAssignments=[{attributeId=urn:org:onap::obligation:policyid,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributeValue={dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=onap.restart.tca}}{attributeId=urn:org:onap::obligation:policycontent,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributeValue={dataTypeId=http://www.w3.org/2001/XMLSchema#string,value={"type":"onap.policies.monitoring.tcagen2","type_version":"1.0.0","properties":{"tca.policy":{"domain":"measurementsForVfScaling","metricsPerEventName":[{"eventName":"Measurement_vGMUX","controlLoopSchemaType":"VNF","policyScope":"DCAE","policyName":"DCAE.Config_tca-hi-lo","policyVersion":"v0.0.1","thresholds":[{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"EQUAL","severity":"MAJOR","closedLoopEventStatus":"ABATED"},{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"GREATER","severity":"CRITICAL","closedLoopEventStatus":"ONSET"}]}]}},"name":"onap.restart.tca","version":"1.0.0","metadata":{"policy-id":"onap.restart.tca","policy-version":"1.0.0"}}}}{attributeId=urn:org:onap::obligation:policytype,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributeValue={dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=onap.policies.monitoring.tcagen2}}]}],attributeCategories=[{category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,attributes=[{attributeId=urn:oasis:names:tc:xacml:1.0:subject:subject-id,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=DCAE}],includeInResults=true}{attributeId=urn:org:onap:onap-component,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=PolicyHandler}],includeInResults=true}{attributeId=urn:org:onap:onap-instance,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=622431a4-9dea-4eae-b443-3b2164639c64}],includeInResults=true}]}{category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributes=[{attributeId=urn:oasis:names:tc:xacml:1.0:resource:resource-id,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=onap.restart.tca}],includeInResults=true}]}],policyIdentifiers=[{id=onap.restart.tca,version=1.0.0}],policySetIdentifiers=[{id=f172ec62-5b8b-456a-9a1a-5fff266087b4,version=1.0}]}]} 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:53.933+00:00|INFO|StdCombinedPolicyResultsTranslator|qtp2014233765-29] Obligation: urn:org:onap:rest:body 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:53.934+00:00|WARN|StdCombinedPolicyResultsTranslator|qtp2014233765-29] Advice found - not supported in this class class org.onap.policy.pdp.xacml.application.common.std.StdCombinedPolicyResultsTranslator 14:59:57 
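The POST /policy/pdpx/v1/decision calls being served above are issued by the CSIT suite; a minimal python-requests sketch of the same configure call is below. Scheme, host, port, and password are assumptions — the log only shows the path, the policyadmin user, the 200 responses, and the python-requests/2.32.4 user agent.

```python
# Minimal sketch of the "configure" decision call seen in the RequestLog
# entries above; endpoint address and password are hypothetical.
import requests

decision_request = {
    "ONAPName": "DCAE",
    "ONAPComponent": "PolicyHandler",
    "ONAPInstance": "622431a4-9dea-4eae-b443-3b2164639c64",
    "action": "configure",
    "resource": {"policy-id": "onap.restart.tca"},
}

resp = requests.post(
    "http://xacml-pdp:6969/policy/pdpx/v1/decision",  # hypothetical address
    params={"abbrev": "true"},  # the first call in the log used ?abbrev=true
    json=decision_request,
    auth=("policyadmin", "secret"),  # hypothetical password
)
resp.raise_for_status()
print(resp.json())  # abbreviated body: {"policies": {"onap.restart.tca": {...}}}
```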
policy-xacml-pdp | [2025-06-13T14:58:53.934+00:00|INFO|MonitoringPdpApplication|qtp2014233765-29] Unsupported query param for Monitoring application: {null=[]} 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:53.936+00:00|INFO|RequestLog|qtp2014233765-29] 172.17.0.6 - policyadmin [13/Jun/2025:14:58:53 +0000] "POST /policy/pdpx/v1/decision?null HTTP/1.1" 200 1055 "" "python-requests/2.32.4" 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:53.945+00:00|INFO|StdCombinedPolicyResultsTranslator|qtp2014233765-30] Converting Request DecisionRequest(onapName=SDNC, onapComponent=SDNC-component, onapInstance=SDNC-component-instance, requestId=unique-request-sdnc-1, context=null, action=naming, currentDateTime=null, currentDate=null, currentTime=null, timeZone=null, resource={nfRole=[], naming-type=[], property-name=[], policy-type=[onap.policies.Naming]}) 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:53.945+00:00|WARN|RequestParser|qtp2014233765-30] Unable to extract attribute value from object: urn:oasis:names:tc:xacml:1.0:resource:resource-id 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:53.945+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-30] Constructed using properties {xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, root1.file=/opt/app/policy/pdpx/apps/naming/SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP_1.0.0.xml, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory, xacml.rootPolicies=root1, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.referencedPolicies=, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory} 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:53.945+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-30] Initializing OnapPolicyFinderFactory Properties 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:53.945+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-30] Combining root policies with urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:53.946+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-30] Loading policy file /opt/app/policy/pdpx/apps/naming/SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP_1.0.0.xml 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:53.954+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-30] Root Policies: 1 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:53.954+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-30] Referenced Policies: 0 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:53.954+00:00|INFO|StdPolicyFinder|qtp2014233765-30] Updating policy map with policy 393abb79-9d92-4638-bc69-1509d0a85b0d version 1.0 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:53.954+00:00|INFO|StdPolicyFinder|qtp2014233765-30] Updating policy map with policy SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP version 1.0.0 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:53.956+00:00|INFO|StdXacmlApplicationServiceProvider|qtp2014233765-30] Elapsed Time: 11ms 14:59:57 
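Each application directory carries a per-app xacml.properties file, and the "Constructed using properties {...}" lines above are OnapPolicyFinderFactory echoing it back. A sketch of the naming application's properties, reconstructed from that log line as a Python dict (the write_properties helper is illustrative — the real file is written by XacmlPolicyUtils in Java):

```python
# Key/value pairs reconstructed from the "Constructed using properties" log
# line for the naming application; write_properties is an illustrative helper,
# not the actual XacmlPolicyUtils code, and skips java.util.Properties escaping.
xacml_properties = {
    "xacml.rootPolicies": "root1",
    "root1.file": "/opt/app/policy/pdpx/apps/naming/"
                  "SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP_1.0.0.xml",
    "xacml.referencedPolicies": "",
    "xacml.att.policyFinderFactory":
        "org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory",
    "xacml.att.policyFinderFactory.combineRootPolicies":
        "urn:com:att:xacml:3.0:policy-combining-algorithm:"
        "combined-permit-overrides",
    "xacml.pdpEngineFactory": "com.att.research.xacmlatt.pdp.ATTPDPEngineFactory",
}

def write_properties(path: str, props: dict) -> None:
    """Write key=value lines, roughly the layout java.util.Properties reads."""
    with open(path, "w") as fh:
        for key, value in sorted(props.items()):
            fh.write(f"{key}={value}\n")

write_properties("xacml.properties", xacml_properties)
```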
policy-xacml-pdp | [2025-06-13T14:58:53.956+00:00|INFO|StdBaseTranslator|qtp2014233765-30] Converting Response {results=[{decision=Permit,status={statusCode={statusCodeValue=urn:oasis:names:tc:xacml:1.0:status:ok}},obligations=[{id=urn:org:onap:rest:body,attributeAssignments=[{attributeId=urn:org:onap::obligation:policyid,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributeValue={dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP}}{attributeId=urn:org:onap::obligation:policycontent,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributeValue={dataTypeId=http://www.w3.org/2001/XMLSchema#string,value={"type":"onap.policies.Naming","type_version":"1.0.0","properties":{"policy-instance-name":"ONAP_NF_NAMING_TIMESTAMP","naming-models":[{"naming-type":"VNF","naming-recipe":"AIC_CLOUD_REGION|DELIMITER|CONSTANT|DELIMITER|TIMESTAMP","name-operation":"to_lower_case()","naming-properties":[{"property-name":"AIC_CLOUD_REGION"},{"property-name":"CONSTANT","property-value":"onap-nf"},{"property-name":"TIMESTAMP"},{"property-value":"-","property-name":"DELIMITER"}]},{"naming-type":"VNFC","naming-recipe":"VNF_NAME|DELIMITER|NFC_NAMING_CODE|DELIMITER|SEQUENCE","name-operation":"to_lower_case()","naming-properties":[{"property-name":"VNF_NAME"},{"property-name":"SEQUENCE","increment-sequence":{"max":"zzz","scope":"ENTIRETY","start-value":"1","length":"3","increment":"1","sequence-type":"alpha-numeric"}},{"property-name":"NFC_NAMING_CODE"},{"property-value":"-","property-name":"DELIMITER"}]},{"naming-type":"VF-MODULE","naming-recipe":"VNF_NAME|DELIMITER|VF_MODULE_LABEL|DELIMITER|VF_MODULE_TYPE|DELIMITER|SEQUENCE","name-operation":"to_lower_case()","naming-properties":[{"property-name":"VNF_NAME"},{"property-value":"-","property-name":"DELIMITER"},{"property-name":"VF_MODULE_LABEL"},{"property-name":"VF_MODULE_TYPE"},{"property-name":"SEQUENCE","increment-sequence":{"max":"zzz","scope":"PRECEEDING","start-value":"1","length":"3","increment":"1","sequence-type":"alpha-numeric"}}]}]},"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0","metadata":{"policy-id":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","policy-version":"1.0.0"}}}}{attributeId=urn:org:onap::obligation:policytype,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributeValue={dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=onap.policies.Naming}}]}],attributeCategories=[{category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,attributes=[{attributeId=urn:oasis:names:tc:xacml:1.0:subject:subject-id,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=SDNC}],includeInResults=true}{attributeId=urn:org:onap:onap-component,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=SDNC-component}],includeInResults=true}{attributeId=urn:org:onap:onap-instance,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=SDNC-component-instance}],includeInResults=true}]}{category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributes=[{attributeId=urn:org:onap:policy-type,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=onap.policies.Naming}],includeInResults=true}]}],policyIdentifiers=[{id=SDNC_Polic
y.ONAP_NF_NAMING_TIMESTAMP,version=1.0.0}],policySetIdentifiers=[{id=393abb79-9d92-4638-bc69-1509d0a85b0d,version=1.0}]}]} 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:53.956+00:00|INFO|StdCombinedPolicyResultsTranslator|qtp2014233765-30] Obligation: urn:org:onap:rest:body 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:53.956+00:00|WARN|StdCombinedPolicyResultsTranslator|qtp2014233765-30] Advice found - not supported in this class class org.onap.policy.pdp.xacml.application.common.std.StdCombinedPolicyResultsTranslator 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:53.958+00:00|INFO|RequestLog|qtp2014233765-30] 172.17.0.6 - policyadmin [13/Jun/2025:14:58:53 +0000] "POST /policy/pdpx/v1/decision?null HTTP/1.1" 200 1598 "" "python-requests/2.32.4" 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:53.972+00:00|INFO|StdMatchableTranslator|qtp2014233765-28] Converting Request DecisionRequest(onapName=OOF, onapComponent=OOF-component, onapInstance=OOF-component-instance, requestId=null, context={subscriberName=[]}, action=optimize, currentDateTime=null, currentDate=null, currentTime=null, timeZone=null, resource={scope=[], services=[], resources=[], geography=[]}) 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:53.975+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-28] Constructed using properties {xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, root1.file=/opt/app/policy/pdpx/apps/optimization/OSDF_CASABLANCA.Affinity_Default_1.0.0.xml, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory, xacml.rootPolicies=root1, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.referencedPolicies=, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory} 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:53.975+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-28] Initializing OnapPolicyFinderFactory Properties 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:53.975+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-28] Combining root policies with urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:53.975+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-28] Loading policy file /opt/app/policy/pdpx/apps/optimization/OSDF_CASABLANCA.Affinity_Default_1.0.0.xml 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:53.982+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-28] Root Policies: 1 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:53.982+00:00|INFO|OnapPolicyFinderFactory|qtp2014233765-28] Referenced Policies: 0 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:53.982+00:00|INFO|StdPolicyFinder|qtp2014233765-28] Updating policy map with policy 2b759476-56ab-447d-ad39-2356793ff05b version 1.0 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:53.982+00:00|INFO|StdPolicyFinder|qtp2014233765-28] Updating policy map with policy OSDF_CASABLANCA.Affinity_Default version 1.0.0 14:59:57 
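The optimize transaction being served here carries empty matchable attributes, so the Affinity policy still applies with its weight-0 obligation. A sketch of the request body matching the DecisionRequest fields logged at the start of this transaction (field capitalization follows the public decision API and is an assumption; the empty lists mirror the logged request):

```python
# Sketch of the "optimize" DecisionRequest logged above; field capitalization
# is an assumption, the empty lists mirror the logged request exactly.
optimize_request = {
    "ONAPName": "OOF",
    "ONAPComponent": "OOF-component",
    "ONAPInstance": "OOF-component-instance",
    "action": "optimize",
    "context": {"subscriberName": []},
    "resource": {"scope": [], "services": [], "resources": [], "geography": []},
}
```

Posted to the same decision endpoint as in the earlier sketch, this yields the Permit shown in the next lines, with OSDF_CASABLANCA.Affinity_Default returned under its weight-0 obligation.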
policy-xacml-pdp | [2025-06-13T14:58:53.983+00:00|INFO|StdXacmlApplicationServiceProvider|qtp2014233765-28] Elapsed Time: 9ms 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:53.983+00:00|INFO|StdBaseTranslator|qtp2014233765-28] Converting Response {results=[{decision=Permit,status={statusCode={statusCodeValue=urn:oasis:names:tc:xacml:1.0:status:ok}},obligations=[{id=urn:org:onap:rest:body,attributeAssignments=[{attributeId=urn:org:onap::obligation:policyid,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributeValue={dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=OSDF_CASABLANCA.Affinity_Default}}{attributeId=urn:org:onap::obligation:policycontent,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributeValue={dataTypeId=http://www.w3.org/2001/XMLSchema#string,value={"type":"onap.policies.optimization.resource.AffinityPolicy","type_version":"1.0.0","properties":{"geography":[],"identity":"affinity_vCPE","scope":[],"affinityProperties":{"qualifier":"same","category":"complex"},"resources":[],"services":[],"applicableResources":"any"},"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0","metadata":{"policy-id":"OSDF_CASABLANCA.Affinity_Default","policy-version":"1.0.0"}}}}{attributeId=urn:org:onap::obligation:weight,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributeValue={dataTypeId=http://www.w3.org/2001/XMLSchema#integer,value=0}}{attributeId=urn:org:onap::obligation:policytype,category=urn:oasis:names:tc:xacml:3.0:attribute-category:resource,attributeValue={dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=onap.policies.optimization.resource.AffinityPolicy}}]}],attributeCategories=[{category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,attributes=[{attributeId=urn:oasis:names:tc:xacml:1.0:subject:subject-id,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=OOF}],includeInResults=true}{attributeId=urn:org:onap:onap-component,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=OOF-component}],includeInResults=true}{attributeId=urn:org:onap:onap-instance,category=urn:oasis:names:tc:xacml:1.0:subject-category:access-subject,values=[{dataTypeId=http://www.w3.org/2001/XMLSchema#string,value=OOF-component-instance}],includeInResults=true}]}],policyIdentifiers=[{id=OSDF_CASABLANCA.Affinity_Default,version=1.0.0}],policySetIdentifiers=[{id=2b759476-56ab-447d-ad39-2356793ff05b,version=1.0}]}]} 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:53.984+00:00|INFO|StdMatchableTranslator|qtp2014233765-28] Obligation: urn:org:onap:rest:body 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:53.984+00:00|INFO|StdMatchableTranslator|qtp2014233765-28] New entry onap.policies.optimization.resource.AffinityPolicy weight 0 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:53.984+00:00|INFO|StdMatchableTranslator|qtp2014233765-28] Policy (OSDF_CASABLANCA.Affinity_Default,{type=onap.policies.optimization.resource.AffinityPolicy, type_version=1.0.0, properties={geography=[], identity=affinity_vCPE, scope=[], affinityProperties={qualifier=same, category=complex}, resources=[], services=[], applicableResources=any}, name=OSDF_CASABLANCA.Affinity_Default, version=1.0.0, metadata={policy-id=OSDF_CASABLANCA.Affinity_Default, policy-version=1.0.0}}) 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:53.986+00:00|INFO|RequestLog|qtp2014233765-28] 172.17.0.6 - policyadmin 
[13/Jun/2025:14:58:53 +0000] "POST /policy/pdpx/v1/decision?null HTTP/1.1" 200 467 "" "python-requests/2.32.4" 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:54.373+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 14:59:57 policy-xacml-pdp | {"source":"pap-8d981b63-9064-4d54-8468-b1eb1f91dc26","description":"The default group that registers all supported policy types and pdps.","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"onap.restart.tca","version":"1.0.0"}],"messageName":"PDP_UPDATE","requestId":"cb526d69-01bf-4ec2-b43b-e5796b06e4c5","timestampMs":1749826734338,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:54.374+00:00|INFO|XacmlPdpUpdateListener|KAFKA-source-policy-pdp-pap] PDP update message has been received from the PAP - PdpUpdate(super=PdpMessage(messageName=PDP_UPDATE, requestId=cb526d69-01bf-4ec2-b43b-e5796b06e4c5, timestampMs=1749826734338, name=xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e, pdpGroup=defaultGroup, pdpSubgroup=xacml), source=pap-8d981b63-9064-4d54-8468-b1eb1f91dc26, description=The default group that registers all supported policy types and pdps., pdpHeartbeatIntervalMs=null, policiesToBeDeployed=[], policiesToBeUndeployed=[onap.restart.tca 1.0.0]) 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:54.374+00:00|ERROR|StdXacmlApplicationServiceProvider|KAFKA-source-policy-pdp-pap] Failed to find ToscaPolicy {policy-id=onap.restart.tca, policy-version=1.0.0} in our map size 0 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:54.374+00:00|ERROR|StdXacmlApplicationServiceProvider|KAFKA-source-policy-pdp-pap] Failed to find ToscaPolicy {policy-id=onap.restart.tca, policy-version=1.0.0} in our map size 1 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:54.374+00:00|ERROR|StdXacmlApplicationServiceProvider|KAFKA-source-policy-pdp-pap] Failed to find ToscaPolicy {policy-id=onap.restart.tca, policy-version=1.0.0} in our map size 1 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:54.374+00:00|ERROR|StdXacmlApplicationServiceProvider|KAFKA-source-policy-pdp-pap] Failed to find ToscaPolicy {policy-id=onap.restart.tca, policy-version=1.0.0} in our map size 0 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:54.374+00:00|ERROR|StdXacmlApplicationServiceProvider|KAFKA-source-policy-pdp-pap] Failed to find ToscaPolicy {policy-id=onap.restart.tca, policy-version=1.0.0} in our map size 0 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:54.375+00:00|INFO|XacmlPolicyUtils|KAFKA-source-policy-pdp-pap] Storing xacml properties {xacml.att.evaluationContextFactory=com.att.research.xacmlatt.pdp.std.StdEvaluationContextFactory, xacml.pepEngineFactory=com.att.research.xacml.std.pep.StdEngineFactory, xacml.dataTypeFactory=com.att.research.xacml.std.StdDataTypeFactory, xacml.att.policyFinderFactory.combineRootPolicies=urn:com:att:xacml:3.0:policy-combining-algorithm:combined-permit-overrides, xacml.att.policyFinderFactory=org.onap.policy.pdp.xacml.application.common.OnapPolicyFinderFactory, xacml.att.functionDefinitionFactory=com.att.research.xacmlatt.pdp.std.StdFunctionDefinitionFactory, xacml.rootPolicies=, xacml.att.combiningAlgorithmFactory=com.att.research.xacmlatt.pdp.std.StdCombiningAlgorithmFactory, xacml.referencedPolicies=, xacml.pdpEngineFactory=com.att.research.xacmlatt.pdp.ATTPDPEngineFactory, xacml.traceEngineFactory=com.att.research.xacml.std.trace.LoggingTraceEngineFactory, xacml.pipFinderFactory=com.att.research.xacml.std.pip.StdPIPFinderFactory} 14:59:57 
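The undeploy round-trip above is driven by a PDP_UPDATE message on the policy-pdp-pap topic: the PDP removes each entry in policiesToBeUndeployed (the repeated "Failed to find ToscaPolicy" errors are each application checking its own map), rewrites the affected xacml.properties with an empty root, and answers with a PDP_STATUS carrying the surviving policy list. A condensed sketch of that consumer-side handling (handle_update is illustrative — the real logic is XacmlPdpUpdatePublisher.handlePdpUpdate in Java):

```python
import json

# Condensed consumer-side handling of the PDP_UPDATE seen above;
# handle_update is an illustrative stand-in for the Java implementation.
def handle_update(raw_message: str, deployed: dict) -> dict:
    msg = json.loads(raw_message)
    assert msg["messageName"] == "PDP_UPDATE"
    for ref in msg.get("policiesToBeUndeployed", []):
        deployed.pop((ref["name"], ref["version"]), None)
    for policy in msg.get("policiesToBeDeployed", []):
        deployed[(policy["name"], policy["version"])] = policy
    # The PDP_STATUS answer echoes the request id and the surviving policies.
    return {
        "messageName": "PDP_STATUS",
        "response": {"responseTo": msg["requestId"],
                     "responseStatus": "SUCCESS"},
        "policies": [{"name": n, "version": v} for (n, v) in deployed],
    }
```

Fed the PDP_UPDATE above with the three deployed policies, this returns a PDP_STATUS listing only SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP and OSDF_CASABLANCA.Affinity_Default, matching the [OUT|KAFKA|policy-pdp-pap] message in the following lines.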
policy-xacml-pdp | /opt/app/policy/pdpx/apps/monitoring/xacml.properties 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:54.376+00:00|INFO|XacmlPdpApplicationManager|KAFKA-source-policy-pdp-pap] Unloaded ToscaPolicy {policy-id=onap.restart.tca, policy-version=1.0.0} from application monitoring 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:54.376+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 14:59:57 policy-xacml-pdp | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"},{"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0"}],"response":{"responseTo":"cb526d69-01bf-4ec2-b43b-e5796b06e4c5","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"fe9b2d1d-c5b0-4dd8-9c19-c42c7ad985ee","timestampMs":1749826734376,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:54.383+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 14:59:57 policy-xacml-pdp | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"},{"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0"}],"response":{"responseTo":"cb526d69-01bf-4ec2-b43b-e5796b06e4c5","responseStatus":"SUCCESS"},"messageName":"PDP_STATUS","requestId":"fe9b2d1d-c5b0-4dd8-9c19-c42c7ad985ee","timestampMs":1749826734376,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 14:59:57 policy-xacml-pdp | [2025-06-13T14:58:54.383+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 14:59:57 policy-xacml-pdp | [2025-06-13T14:59:20.051+00:00|INFO|XacmlPdpHearbeatPublisher|pool-2-thread-1] Sending Xacml PDP heartbeat to the PAP - PdpStatus(super=PdpMessage(messageName=PDP_STATUS, requestId=eefa5f0e-984c-486a-a008-71aa56b4235b, timestampMs=1749826760051, name=xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e, pdpGroup=defaultGroup, pdpSubgroup=xacml), pdpType=xacml, state=ACTIVE, healthy=HEALTHY, description=null, policies=[SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP 1.0.0, OSDF_CASABLANCA.Affinity_Default 1.0.0], deploymentInstanceInfo=null, properties=null, response=null) 14:59:57 policy-xacml-pdp | [2025-06-13T14:59:20.052+00:00|INFO|network|pool-2-thread-1] [OUT|KAFKA|policy-pdp-pap] 14:59:57 policy-xacml-pdp | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"},{"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0"}],"messageName":"PDP_STATUS","requestId":"eefa5f0e-984c-486a-a008-71aa56b4235b","timestampMs":1749826760051,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 14:59:57 policy-xacml-pdp | [2025-06-13T14:59:20.061+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 14:59:57 policy-xacml-pdp | {"pdpType":"xacml","state":"ACTIVE","healthy":"HEALTHY","policies":[{"name":"SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP","version":"1.0.0"},{"name":"OSDF_CASABLANCA.Affinity_Default","version":"1.0.0"}],"messageName":"PDP_STATUS","requestId":"eefa5f0e-984c-486a-a008-71aa56b4235b","timestampMs":1749826760051,"name":"xacml-7f9dba5f-b421-4952-8db6-b7ad2a7a947e","pdpGroup":"defaultGroup","pdpSubgroup":"xacml"} 14:59:57 policy-xacml-pdp | [2025-06-13T14:59:20.062+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] 
discarding event of type PDP_STATUS 14:59:57 policy-xacml-pdp | [2025-06-13T14:59:35.580+00:00|INFO|RequestLog|qtp2014233765-31] 172.17.0.4 - policyadmin [13/Jun/2025:14:59:35 +0000] "GET /metrics HTTP/1.1" 200 2211 "" "Prometheus/3.4.1" 14:59:58 postgres | The files belonging to this database system will be owned by user "postgres". 14:59:58 postgres | This user must also own the server process. 14:59:58 postgres | 14:59:58 postgres | The database cluster will be initialized with locale "en_US.utf8". 14:59:58 postgres | The default database encoding has accordingly been set to "UTF8". 14:59:58 postgres | The default text search configuration will be set to "english". 14:59:58 postgres | 14:59:58 postgres | Data page checksums are disabled. 14:59:58 postgres | 14:59:58 postgres | fixing permissions on existing directory /var/lib/postgresql/data ... ok 14:59:58 postgres | creating subdirectories ... ok 14:59:58 postgres | selecting dynamic shared memory implementation ... posix 14:59:58 postgres | selecting default max_connections ... 100 14:59:58 postgres | selecting default shared_buffers ... 128MB 14:59:58 postgres | selecting default time zone ... Etc/UTC 14:59:58 postgres | creating configuration files ... ok 14:59:58 postgres | running bootstrap script ... ok 14:59:58 postgres | performing post-bootstrap initialization ... ok 14:59:58 postgres | syncing data to disk ... ok 14:59:58 postgres | 14:59:58 postgres | 14:59:58 postgres | Success. You can now start the database server using: 14:59:58 postgres | 14:59:58 postgres | pg_ctl -D /var/lib/postgresql/data -l logfile start 14:59:58 postgres | 14:59:58 postgres | initdb: warning: enabling "trust" authentication for local connections 14:59:58 postgres | initdb: hint: You can change this by editing pg_hba.conf or using the option -A, or --auth-local and --auth-host, the next time you run initdb. 14:59:58 postgres | waiting for server to start....2025-06-13 14:56:35.855 UTC [47] LOG: starting PostgreSQL 16.4 (Debian 16.4-1.pgdg120+2) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit 14:59:58 postgres | 2025-06-13 14:56:35.857 UTC [47] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432" 14:59:58 postgres | 2025-06-13 14:56:35.864 UTC [50] LOG: database system was shut down at 2025-06-13 14:56:35 UTC 14:59:58 postgres | 2025-06-13 14:56:35.870 UTC [47] LOG: database system is ready to accept connections 14:59:58 postgres | done 14:59:58 postgres | server started 14:59:58 postgres | 14:59:58 postgres | /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/db-pg.conf 14:59:58 postgres | 14:59:58 postgres | /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/db-pg.sh 14:59:58 postgres | #!/bin/bash -xv 14:59:58 postgres | # Copyright (C) 2022, 2024 Nordix Foundation. All rights reserved 14:59:58 postgres | # 14:59:58 postgres | # Licensed under the Apache License, Version 2.0 (the "License"); 14:59:58 postgres | # you may not use this file except in compliance with the License. 14:59:58 postgres | # You may obtain a copy of the License at 14:59:58 postgres | # 14:59:58 postgres | # http://www.apache.org/licenses/LICENSE-2.0 14:59:58 postgres | # 14:59:58 postgres | # Unless required by applicable law or agreed to in writing, software 14:59:58 postgres | # distributed under the License is distributed on an "AS IS" BASIS, 14:59:58 postgres | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
14:59:58 postgres | # See the License for the specific language governing permissions and 14:59:58 postgres | # limitations under the License. 14:59:58 postgres | 14:59:58 postgres | psql -U postgres -d postgres --command "CREATE USER ${PGSQL_USER} WITH PASSWORD '${PGSQL_PASSWORD}';" 14:59:58 postgres | + psql -U postgres -d postgres --command 'CREATE USER policy_user WITH PASSWORD '\''policy_user'\'';' 14:59:58 postgres | CREATE ROLE 14:59:58 postgres | 14:59:58 postgres | for db in migration pooling policyadmin policyclamp operationshistory clampacm 14:59:58 postgres | do 14:59:58 postgres | psql -U postgres -d postgres --command "CREATE DATABASE ${db};" 14:59:58 postgres | psql -U postgres -d postgres --command "ALTER DATABASE ${db} OWNER TO ${PGSQL_USER} ;" 14:59:58 postgres | psql -U postgres -d postgres --command "GRANT ALL PRIVILEGES ON DATABASE ${db} TO ${PGSQL_USER} ;" 14:59:58 postgres | done 14:59:58 postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm 14:59:58 postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE migration;' 14:59:58 postgres | CREATE DATABASE 14:59:58 postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE migration OWNER TO policy_user ;' 14:59:58 postgres | ALTER DATABASE 14:59:58 postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE migration TO policy_user ;' 14:59:58 postgres | GRANT 14:59:58 postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm 14:59:58 postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE pooling;' 14:59:58 postgres | CREATE DATABASE 14:59:58 postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE pooling OWNER TO policy_user ;' 14:59:58 postgres | ALTER DATABASE 14:59:58 postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE pooling TO policy_user ;' 14:59:58 postgres | GRANT 14:59:58 postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm 14:59:58 postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE policyadmin;' 14:59:58 postgres | CREATE DATABASE 14:59:58 postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE policyadmin OWNER TO policy_user ;' 14:59:58 postgres | ALTER DATABASE 14:59:58 postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE policyadmin TO policy_user ;' 14:59:58 postgres | GRANT 14:59:58 postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm 14:59:58 postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE policyclamp;' 14:59:58 postgres | CREATE DATABASE 14:59:58 postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE policyclamp OWNER TO policy_user ;' 14:59:58 postgres | ALTER DATABASE 14:59:58 postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE policyclamp TO policy_user ;' 14:59:58 postgres | GRANT 14:59:58 postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm 14:59:58 postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE operationshistory;' 14:59:58 postgres | CREATE DATABASE 14:59:58 postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE operationshistory OWNER TO policy_user ;' 14:59:58 postgres | ALTER DATABASE 14:59:58 postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE operationshistory TO policy_user ;' 14:59:58 postgres | 
GRANT 14:59:58 postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm 14:59:58 postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE clampacm;' 14:59:58 postgres | CREATE DATABASE 14:59:58 postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE clampacm OWNER TO policy_user ;' 14:59:58 postgres | ALTER DATABASE 14:59:58 postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE clampacm TO policy_user ;' 14:59:58 postgres | GRANT 14:59:58 postgres | 14:59:58 postgres | 2025-06-13 14:56:37.379 UTC [47] LOG: received fast shutdown request 14:59:58 postgres | waiting for server to shut down....2025-06-13 14:56:37.382 UTC [47] LOG: aborting any active transactions 14:59:58 postgres | 2025-06-13 14:56:37.384 UTC [47] LOG: background worker "logical replication launcher" (PID 53) exited with exit code 1 14:59:58 postgres | 2025-06-13 14:56:37.386 UTC [48] LOG: shutting down 14:59:58 postgres | 2025-06-13 14:56:37.388 UTC [48] LOG: checkpoint starting: shutdown immediate 14:59:58 postgres | 2025-06-13 14:56:37.963 UTC [48] LOG: checkpoint complete: wrote 5511 buffers (33.6%); 0 WAL file(s) added, 0 removed, 1 recycled; write=0.401 s, sync=0.164 s, total=0.578 s; sync files=1788, longest=0.038 s, average=0.001 s; distance=25535 kB, estimate=25535 kB; lsn=0/2DDA218, redo lsn=0/2DDA218 14:59:58 postgres | 2025-06-13 14:56:37.974 UTC [47] LOG: database system is shut down 14:59:58 postgres | done 14:59:58 postgres | server stopped 14:59:58 postgres | 14:59:58 postgres | PostgreSQL init process complete; ready for start up. 14:59:58 postgres | 14:59:58 postgres | 2025-06-13 14:56:38.011 UTC [1] LOG: starting PostgreSQL 16.4 (Debian 16.4-1.pgdg120+2) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit 14:59:58 postgres | 2025-06-13 14:56:38.011 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432 14:59:58 postgres | 2025-06-13 14:56:38.011 UTC [1] LOG: listening on IPv6 address "::", port 5432 14:59:58 postgres | 2025-06-13 14:56:38.017 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432" 14:59:58 postgres | 2025-06-13 14:56:38.026 UTC [100] LOG: database system was shut down at 2025-06-13 14:56:37 UTC 14:59:58 postgres | 2025-06-13 14:56:38.030 UTC [1] LOG: database system is ready to accept connections 14:59:58 prometheus | time=2025-06-13T14:56:37.008Z level=INFO source=main.go:674 msg="No time or size retention was set so using the default time retention" duration=15d 14:59:58 prometheus | time=2025-06-13T14:56:37.008Z level=INFO source=main.go:725 msg="Starting Prometheus Server" mode=server version="(version=3.4.1, branch=HEAD, revision=aea6503d9bbaad6c5faff3ecf6f1025213356c92)" 14:59:58 prometheus | time=2025-06-13T14:56:37.008Z level=INFO source=main.go:730 msg="operational information" build_context="(go=go1.24.3, platform=linux/amd64, user=root@16f976c24db1, date=20250531-10:44:38, tags=netgo,builtinassets,stringlabels)" host_details="(Linux 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 prometheus (none))" fd_limits="(soft=1048576, hard=1048576)" vm_limits="(soft=unlimited, hard=unlimited)" 14:59:58 prometheus | time=2025-06-13T14:56:37.010Z level=INFO source=main.go:806 msg="Leaving GOMAXPROCS=8: CPU quota undefined" component=automaxprocs 14:59:58 prometheus | time=2025-06-13T14:56:37.012Z level=INFO source=web.go:656 msg="Start listening for connections" component=web address=0.0.0.0:9090 14:59:58 prometheus | 
14:59:58 prometheus | time=2025-06-13T14:56:37.008Z level=INFO source=main.go:674 msg="No time or size retention was set so using the default time retention" duration=15d
14:59:58 prometheus | time=2025-06-13T14:56:37.008Z level=INFO source=main.go:725 msg="Starting Prometheus Server" mode=server version="(version=3.4.1, branch=HEAD, revision=aea6503d9bbaad6c5faff3ecf6f1025213356c92)"
14:59:58 prometheus | time=2025-06-13T14:56:37.008Z level=INFO source=main.go:730 msg="operational information" build_context="(go=go1.24.3, platform=linux/amd64, user=root@16f976c24db1, date=20250531-10:44:38, tags=netgo,builtinassets,stringlabels)" host_details="(Linux 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 prometheus (none))" fd_limits="(soft=1048576, hard=1048576)" vm_limits="(soft=unlimited, hard=unlimited)"
14:59:58 prometheus | time=2025-06-13T14:56:37.010Z level=INFO source=main.go:806 msg="Leaving GOMAXPROCS=8: CPU quota undefined" component=automaxprocs
14:59:58 prometheus | time=2025-06-13T14:56:37.012Z level=INFO source=web.go:656 msg="Start listening for connections" component=web address=0.0.0.0:9090
14:59:58 prometheus | time=2025-06-13T14:56:37.013Z level=INFO source=main.go:1266 msg="Starting TSDB ..."
14:59:58 prometheus | time=2025-06-13T14:56:37.014Z level=INFO source=tls_config.go:347 msg="Listening on" component=web address=[::]:9090
14:59:58 prometheus | time=2025-06-13T14:56:37.014Z level=INFO source=tls_config.go:350 msg="TLS is disabled." component=web http2=false address=[::]:9090
14:59:58 prometheus | time=2025-06-13T14:56:37.018Z level=INFO source=head.go:657 msg="Replaying on-disk memory mappable chunks if any" component=tsdb
14:59:58 prometheus | time=2025-06-13T14:56:37.018Z level=INFO source=head.go:744 msg="On-disk memory mappable chunks replay completed" component=tsdb duration=1.21µs
14:59:58 prometheus | time=2025-06-13T14:56:37.018Z level=INFO source=head.go:752 msg="Replaying WAL, this may take a while" component=tsdb
14:59:58 prometheus | time=2025-06-13T14:56:37.018Z level=INFO source=head.go:825 msg="WAL segment loaded" component=tsdb segment=0 maxSegment=0 duration=229.954µs
14:59:58 prometheus | time=2025-06-13T14:56:37.018Z level=INFO source=head.go:862 msg="WAL replay completed" component=tsdb checkpoint_replay_duration=84.231µs wal_replay_duration=257.024µs wbl_replay_duration=180ns chunk_snapshot_load_duration=0s mmap_chunk_replay_duration=1.21µs total_replay_duration=449.116µs
14:59:58 prometheus | time=2025-06-13T14:56:37.024Z level=INFO source=main.go:1287 msg="filesystem information" fs_type=EXT4_SUPER_MAGIC
14:59:58 prometheus | time=2025-06-13T14:56:37.024Z level=INFO source=main.go:1290 msg="TSDB started"
14:59:58 prometheus | time=2025-06-13T14:56:37.025Z level=INFO source=main.go:1475 msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
14:59:58 prometheus | time=2025-06-13T14:56:37.026Z level=INFO source=main.go:1514 msg="updated GOGC" old=100 new=75
14:59:58 prometheus | time=2025-06-13T14:56:37.026Z level=INFO source=main.go:1524 msg="Completed loading of configuration file" db_storage=1.94µs remote_storage=4.69µs web_handler=600ns query_engine=1.37µs scrape=246.754µs scrape_sd=163.602µs notify=120.822µs notify_sd=19.72µs rules=1.62µs tracing=3.83µs filename=/etc/prometheus/prometheus.yml totalDuration=1.880837ms
14:59:58 prometheus | time=2025-06-13T14:56:37.026Z level=INFO source=main.go:1251 msg="Server is ready to receive web requests."
14:59:58 prometheus | time=2025-06-13T14:56:37.027Z level=INFO source=manager.go:175 msg="Starting rule manager..." component="rule manager"
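Once Prometheus logs "Server is ready to receive web requests.", readiness can be confirmed over HTTP. A quick probe, assuming the container's port 9090 is reachable from the build host; the endpoint paths are standard Prometheus management endpoints, not anything job-specific:

    # /-/ready returns 200 once the server can serve traffic; /-/healthy checks liveness.
    curl -sf http://localhost:9090/-/ready && echo "prometheus ready"
    curl -sf http://localhost:9090/-/healthy && echo "prometheus healthy"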
14:59:58 zookeeper | ===> User
14:59:58 zookeeper | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser)
14:59:58 zookeeper | ===> Configuring ...
14:59:58 zookeeper | ===> Running preflight checks ...
14:59:58 zookeeper | ===> Check if /var/lib/zookeeper/data is writable ...
14:59:58 zookeeper | ===> Check if /var/lib/zookeeper/log is writable ...
14:59:58 zookeeper | ===> Launching ...
14:59:58 zookeeper | ===> Launching zookeeper ...
14:59:58 zookeeper | [2025-06-13 14:56:36,784] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
14:59:58 zookeeper | [2025-06-13 14:56:36,786] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
14:59:58 zookeeper | [2025-06-13 14:56:36,786] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
14:59:58 zookeeper | [2025-06-13 14:56:36,786] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
14:59:58 zookeeper | [2025-06-13 14:56:36,786] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
14:59:58 zookeeper | [2025-06-13 14:56:36,788] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager)
14:59:58 zookeeper | [2025-06-13 14:56:36,788] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager)
14:59:58 zookeeper | [2025-06-13 14:56:36,788] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager)
14:59:58 zookeeper | [2025-06-13 14:56:36,788] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain)
14:59:58 zookeeper | [2025-06-13 14:56:36,789] INFO Log4j 1.2 jmx support not found; jmx disabled. (org.apache.zookeeper.jmx.ManagedUtil)
14:59:58 zookeeper | [2025-06-13 14:56:36,790] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
14:59:58 zookeeper | [2025-06-13 14:56:36,790] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
14:59:58 zookeeper | [2025-06-13 14:56:36,790] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
14:59:58 zookeeper | [2025-06-13 14:56:36,790] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
14:59:58 zookeeper | [2025-06-13 14:56:36,790] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
14:59:58 zookeeper | [2025-06-13 14:56:36,791] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain)
14:59:58 zookeeper | [2025-06-13 14:56:36,801] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@3bbc39f8 (org.apache.zookeeper.server.ServerMetrics)
14:59:58 zookeeper | [2025-06-13 14:56:36,804] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider)
14:59:58 zookeeper | [2025-06-13 14:56:36,804] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider)
14:59:58 zookeeper | [2025-06-13 14:56:36,806] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
14:59:58 zookeeper | [2025-06-13 14:56:36,814] INFO (org.apache.zookeeper.server.ZooKeeperServer)
14:59:58 zookeeper | [2025-06-13 14:56:36,814] INFO ______ _ (org.apache.zookeeper.server.ZooKeeperServer)
14:59:58 zookeeper | [2025-06-13 14:56:36,814] INFO |___ / | | (org.apache.zookeeper.server.ZooKeeperServer)
14:59:58 zookeeper | [2025-06-13 14:56:36,815] INFO / / ___ ___ | | __ ___ ___ _ __ ___ _ __ (org.apache.zookeeper.server.ZooKeeperServer)
14:59:58 zookeeper | [2025-06-13 14:56:36,815] INFO / / / _ \ / _ \ | |/ / / _ \ / _ \ | '_ \ / _ \ | '__| (org.apache.zookeeper.server.ZooKeeperServer)
14:59:58 zookeeper | [2025-06-13 14:56:36,815] INFO / /__ | (_) | | (_) | | < | __/ | __/ | |_) | | __/ | | (org.apache.zookeeper.server.ZooKeeperServer)
14:59:58 zookeeper | [2025-06-13 14:56:36,815] INFO /_____| \___/ \___/ |_|\_\ \___| \___| | .__/ \___| |_| (org.apache.zookeeper.server.ZooKeeperServer)
14:59:58 zookeeper | [2025-06-13 14:56:36,815] INFO | | (org.apache.zookeeper.server.ZooKeeperServer)
14:59:58 zookeeper | [2025-06-13 14:56:36,815] INFO |_| (org.apache.zookeeper.server.ZooKeeperServer)
14:59:58 zookeeper | [2025-06-13 14:56:36,815] INFO (org.apache.zookeeper.server.ZooKeeperServer)
14:59:58 zookeeper | [2025-06-13 14:56:36,816] INFO Server environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.server.ZooKeeperServer)
14:59:58 zookeeper | [2025-06-13 14:56:36,816] INFO Server environment:host.name=zookeeper (org.apache.zookeeper.server.ZooKeeperServer)
14:59:58 zookeeper | [2025-06-13 14:56:36,816] INFO Server environment:java.version=17.0.14 (org.apache.zookeeper.server.ZooKeeperServer)
14:59:58 zookeeper | [2025-06-13 14:56:36,816] INFO Server environment:java.vendor=Eclipse Adoptium (org.apache.zookeeper.server.ZooKeeperServer)
14:59:58 zookeeper | [2025-06-13 14:56:36,816] INFO Server environment:java.home=/usr/lib/jvm/temurin-17-jre (org.apache.zookeeper.server.ZooKeeperServer)
14:59:58 zookeeper | [2025-06-13 14:56:36,816] INFO Server environment:java.class.path=/usr/bin/../share/java/kafka/kafka-streams-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-transaction-coordinator-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/connect-transforms-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.118.Final.jar:/usr/bin/../share/java/kafka/kafka-clients-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/connect-mirror-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.16.2.jar:/usr/bin/../share/java/kafka/scala-library-2.13.15.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.118.Final.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.16.2.jar:/usr/bin/../share/java/kafka/connect-runtime-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-afterburner-2.16.2.jar:/usr/bin/../share/java/kafka
/kafka-streams-test-utils-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/protobuf-java-3.25.5.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.16.2.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/maven-artifact-3.9.6.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/trogdor-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-server-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-shell-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.15.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-common-4.1.118.Final.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/commons-lang3-3.12.0.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.118.Final.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.6-4.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.118.Final.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.118.Final.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-tools-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.16.2.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.118.Final.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.16.2.jar:/usr/bin/../share/java/kafka/jackson-databind-2.16.2.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/opentelemetry-proto-1.0.0-alpha.jar:/usr/bin/..
/share/java/kafka/jetty-security-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/connect-json-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.16.2.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/kafka-raft-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/plexus-utils-3.5.1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/connect-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/commons-io-2.14.0.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.16.2.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/jackson-core-2.16.2.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jline-3.25.1.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.118.Final.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.118.Final.jar:/usr/bin/../share/java/kafka/kafka-storage-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.57.v20241219.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.server.ZooKeeperServer)
14:59:58 zookeeper | [2025-06-13 14:56:36,816] INFO Server environment:java.library.path=/usr/local/lib64:/usr/local/lib::/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer)
14:59:58 zookeeper | [2025-06-13 14:56:36,816] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer)
14:59:58 zookeeper | [2025-06-13 14:56:36,816] INFO Server environment:java.compiler=<NA> (org.apache.zookeeper.server.ZooKeeperServer)
14:59:58 zookeeper | [2025-06-13 14:56:36,816] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer)
14:59:58 zookeeper | [2025-06-13 14:56:36,816] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer)
14:59:58 zookeeper | [2025-06-13 14:56:36,816] INFO Server environment:os.version=4.15.0-192-generic (org.apache.zookeeper.server.ZooKeeperServer)
14:59:58 zookeeper | [2025-06-13 14:56:36,816] INFO Server environment:user.name=appuser (org.apache.zookeeper.server.ZooKeeperServer)
14:59:58 zookeeper | [2025-06-13 14:56:36,816] INFO Server environment:user.home=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer)
14:59:58 zookeeper | [2025-06-13 14:56:36,816] INFO Server environment:user.dir=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer)
14:59:58 zookeeper | [2025-06-13 14:56:36,817] INFO Server environment:os.memory.free=494MB (org.apache.zookeeper.server.ZooKeeperServer)
14:59:58 zookeeper | [2025-06-13 14:56:36,817] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer)
14:59:58 zookeeper | [2025-06-13 14:56:36,817] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer)
14:59:58 zookeeper | [2025-06-13 14:56:36,817] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer)
14:59:58 zookeeper | [2025-06-13 14:56:36,817] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer)
14:59:58 zookeeper | [2025-06-13 14:56:36,817] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer)
14:59:58 zookeeper | [2025-06-13 14:56:36,817] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer)
14:59:58 zookeeper | [2025-06-13 14:56:36,817] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer)
14:59:58 zookeeper | [2025-06-13 14:56:36,817] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer)
14:59:58 zookeeper | [2025-06-13 14:56:36,817] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer)
14:59:58 zookeeper | [2025-06-13 14:56:36,818] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle)
14:59:58 zookeeper | [2025-06-13 14:56:36,819] INFO minSessionTimeout set to 6000 ms (org.apache.zookeeper.server.ZooKeeperServer)
14:59:58 zookeeper | [2025-06-13 14:56:36,819] INFO maxSessionTimeout set to 60000 ms (org.apache.zookeeper.server.ZooKeeperServer)
14:59:58 zookeeper | [2025-06-13 14:56:36,820] INFO getData response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache)
14:59:58 zookeeper | [2025-06-13 14:56:36,820] INFO getChildren response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache)
14:59:58 zookeeper | [2025-06-13 14:56:36,821] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
14:59:58 zookeeper | [2025-06-13 14:56:36,821] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
14:59:58 zookeeper | [2025-06-13 14:56:36,821] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
14:59:58 zookeeper | [2025-06-13 14:56:36,821] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
14:59:58 zookeeper | [2025-06-13 14:56:36,821] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
14:59:58 zookeeper | [2025-06-13 14:56:36,821] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
14:59:58 zookeeper | [2025-06-13 14:56:36,823] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer)
14:59:58 zookeeper | [2025-06-13 14:56:36,823] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer)
14:59:58 zookeeper | [2025-06-13 14:56:36,824] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper)
14:59:58 zookeeper | [2025-06-13 14:56:36,824] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper)
14:59:58 zookeeper | [2025-06-13 14:56:36,824] INFO Created server with tickTime 3000 ms minSessionTimeout 6000 ms maxSessionTimeout 60000 ms clientPortListenBacklog -1 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer)
14:59:58 zookeeper | [2025-06-13 14:56:36,845] INFO Logging initialized @375ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log)
14:59:58 zookeeper | [2025-06-13 14:56:36,899] WARN o.e.j.s.ServletContextHandler@6150c3ec{/,null,STOPPED} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler)
14:59:58 zookeeper | [2025-06-13 14:56:36,899] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler)
14:59:58 zookeeper | [2025-06-13 14:56:36,914] INFO jetty-9.4.57.v20241219; built: 2025-01-08T21:24:30.412Z; git: df524e6b29271c2e09ba9aea83c18dc9db464a31; jvm 17.0.14+7 (org.eclipse.jetty.server.Server)
14:59:58 zookeeper | [2025-06-13 14:56:36,946] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session)
14:59:58 zookeeper | [2025-06-13 14:56:36,946] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session)
14:59:58 zookeeper | [2025-06-13 14:56:36,948] INFO node0 Scavenging every 600000ms (org.eclipse.jetty.server.session)
14:59:58 zookeeper | [2025-06-13 14:56:36,952] WARN ServletContext@o.e.j.s.ServletContextHandler@6150c3ec{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler)
14:59:58 zookeeper | [2025-06-13 14:56:36,963] INFO Started o.e.j.s.ServletContextHandler@6150c3ec{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler)
14:59:58 zookeeper | [2025-06-13 14:56:36,972] INFO Started ServerConnector@222545dc{HTTP/1.1, (http/1.1)}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector)
14:59:58 zookeeper | [2025-06-13 14:56:36,972] INFO Started @507ms (org.eclipse.jetty.server.Server)
14:59:58 zookeeper | [2025-06-13 14:56:36,973] INFO Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer)
14:59:58 zookeeper | [2025-06-13 14:56:36,978] INFO Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory)
14:59:58 zookeeper | [2025-06-13 14:56:36,979] WARN maxCnxns is not configured, using default value 0. (org.apache.zookeeper.server.ServerCnxnFactory)
14:59:58 zookeeper | [2025-06-13 14:56:36,980] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. (org.apache.zookeeper.server.NIOServerCnxnFactory)
14:59:58 zookeeper | [2025-06-13 14:56:36,982] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory)
14:59:58 zookeeper | [2025-06-13 14:56:36,996] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory)
14:59:58 zookeeper | [2025-06-13 14:56:36,996] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory)
14:59:58 zookeeper | [2025-06-13 14:56:36,996] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase)
14:59:58 zookeeper | [2025-06-13 14:56:36,996] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase)
14:59:58 zookeeper | [2025-06-13 14:56:37,002] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream)
14:59:58 zookeeper | [2025-06-13 14:56:37,002] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
14:59:58 zookeeper | [2025-06-13 14:56:37,004] INFO Snapshot loaded in 7 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase)
14:59:58 zookeeper | [2025-06-13 14:56:37,005] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
14:59:58 zookeeper | [2025-06-13 14:56:37,005] INFO Snapshot taken in 1 ms (org.apache.zookeeper.server.ZooKeeperServer)
14:59:58 zookeeper | [2025-06-13 14:56:37,013] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms (org.apache.zookeeper.server.RequestThrottler)
14:59:58 zookeeper | [2025-06-13 14:56:37,013] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor)
14:59:58 zookeeper | [2025-06-13 14:56:37,029] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager)
14:59:58 zookeeper | [2025-06-13 14:56:37,030] INFO ZooKeeper audit is disabled. (org.apache.zookeeper.audit.ZKAuditProvider)
14:59:58 zookeeper | [2025-06-13 14:56:43,571] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog)
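The zookeeper log above shows a standalone server binding the client port 2181 and an AdminServer on port 8080 with command URL /commands. A quick health probe against that AdminServer; 'ruok' and 'stat' are standard AdminServer commands, and the hostname assumes the compose network seen in the log:

    # Expected to answer over HTTP while the server is up.
    curl -s http://zookeeper:8080/commands/ruok
    # Basic server statistics, connections, and mode (standalone here).
    curl -s http://zookeeper:8080/commands/stat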
14:59:58 Tearing down containers...
14:59:58 Container policy-csit Stopping
14:59:58 Container grafana Stopping
14:59:58 Container policy-xacml-pdp Stopping
14:59:58 Container policy-csit Stopped
14:59:58 Container policy-csit Removing
14:59:58 Container policy-csit Removed
14:59:58 Container grafana Stopped
14:59:58 Container grafana Removing
14:59:58 Container grafana Removed
14:59:58 Container prometheus Stopping
14:59:59 Container prometheus Stopped
14:59:59 Container prometheus Removing
14:59:59 Container prometheus Removed
15:00:08 Container policy-xacml-pdp Stopped
15:00:08 Container policy-xacml-pdp Removing
15:00:08 Container policy-xacml-pdp Removed
15:00:08 Container policy-pap Stopping
15:00:19 Container policy-pap Stopped
15:00:19 Container policy-pap Removing
15:00:19 Container policy-pap Removed
15:00:19 Container policy-api Stopping
15:00:19 Container kafka Stopping
15:00:20 Container kafka Stopped
15:00:20 Container kafka Removing
15:00:20 Container kafka Removed
15:00:20 Container zookeeper Stopping
15:00:20 Container zookeeper Stopped
15:00:20 Container zookeeper Removing
15:00:20 Container zookeeper Removed
15:00:29 Container policy-api Stopped
15:00:29 Container policy-api Removing
15:00:29 Container policy-api Removed
15:00:29 Container policy-db-migrator Stopping
15:00:29 Container policy-db-migrator Stopped
15:00:29 Container policy-db-migrator Removing
15:00:29 Container policy-db-migrator Removed
15:00:29 Container postgres Stopping
15:00:29 Container postgres Stopped
15:00:29 Container postgres Removing
15:00:29 Container postgres Removed
15:00:29 Network compose_default Removing
15:00:29 Network compose_default Removed
15:00:29 $ ssh-agent -k
15:00:29 unset SSH_AUTH_SOCK;
15:00:29 unset SSH_AGENT_PID;
15:00:29 echo Agent pid 2051 killed;
15:00:30 [ssh-agent] Stopped.
15:00:30 Robot results publisher started...
15:00:30 INFO: Checking test criticality is deprecated and will be dropped in a future release!
15:00:30 -Parsing output xml:
15:00:30 Done!
15:00:30 -Copying log files to build dir:
15:00:30 Done!
15:00:30 -Assigning results to build:
15:00:30 Done!
15:00:30 -Checking thresholds:
15:00:30 Done!
15:00:30 Done publishing Robot results.
15:00:30 [PostBuildScript] - [INFO] Executing post build scripts.
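The teardown above is compose removing the containers in reverse dependency order and then deleting the compose_default network. The job's exact invocation is not shown in this excerpt; an equivalent manual cleanup from the compose directory would look roughly like:

    # Stop and remove all services, the default network, and any orphan containers.
    # --volumes also drops anonymous volumes; whether the job passes these flags is an assumption.
    docker compose down --volumes --remove-orphans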
15:00:30 [policy-xacml-pdp-master-project-csit-verify-xacml-pdp] $ /bin/bash /tmp/jenkins15209471800498681819.sh
15:00:30 ---> sysstat.sh
15:00:31 [policy-xacml-pdp-master-project-csit-verify-xacml-pdp] $ /bin/bash /tmp/jenkins8640850003505794288.sh
15:00:31 ---> package-listing.sh
15:00:31 ++ facter osfamily
15:00:31 ++ tr '[:upper:]' '[:lower:]'
15:00:31 + OS_FAMILY=debian
15:00:31 + workspace=/w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp
15:00:31 + START_PACKAGES=/tmp/packages_start.txt
15:00:31 + END_PACKAGES=/tmp/packages_end.txt
15:00:31 + DIFF_PACKAGES=/tmp/packages_diff.txt
15:00:31 + PACKAGES=/tmp/packages_start.txt
15:00:31 + '[' /w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp ']'
15:00:31 + PACKAGES=/tmp/packages_end.txt
15:00:31 + case "${OS_FAMILY}" in
15:00:31 + dpkg -l
15:00:31 + grep '^ii'
15:00:31 + '[' -f /tmp/packages_start.txt ']'
15:00:31 + '[' -f /tmp/packages_end.txt ']'
15:00:31 + diff /tmp/packages_start.txt /tmp/packages_end.txt
15:00:31 + '[' /w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp ']'
15:00:31 + mkdir -p /w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/archives/
15:00:31 + cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/archives/
15:00:31 [policy-xacml-pdp-master-project-csit-verify-xacml-pdp] $ /bin/bash /tmp/jenkins5019103323956257828.sh
15:00:31 ---> capture-instance-metadata.sh
15:00:31 Setup pyenv:
15:00:31 system
15:00:31 3.8.13
15:00:31 3.9.13
15:00:31 * 3.10.6 (set by /w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/.python-version)
15:00:31 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-qCrN from file:/tmp/.os_lf_venv
15:00:33 lf-activate-venv(): INFO: Installing: lftools
15:00:41 lf-activate-venv(): INFO: Adding /tmp/venv-qCrN/bin to PATH
15:00:41 INFO: Running in OpenStack, capturing instance metadata
15:00:42 [policy-xacml-pdp-master-project-csit-verify-xacml-pdp] $ /bin/bash /tmp/jenkins323986876715251989.sh
15:00:42 provisioning config files...
15:00:42 copy managed file [jenkins-log-archives-settings] to file:/w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp@tmp/config17668754252716631788tmp
15:00:42 Regular expression run condition: Expression=[^.*logs-s3.*], Label=[]
15:00:42 Run condition [Regular expression match] preventing perform for step [Provide Configuration files]
15:00:42 [EnvInject] - Injecting environment variables from a build step.
15:00:42 [EnvInject] - Injecting as environment variables the properties content
15:00:42 SERVER_ID=logs
15:00:42
15:00:42 [EnvInject] - Variables injected successfully.
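The package-listing.sh trace above snapshots the installed packages and archives start/end/diff lists into the workspace. Condensed into a self-contained sketch; WORKSPACE stands in for the hard-coded workspace path, and '|| true' is added here because diff exits non-zero whenever the lists differ:

    #!/bin/bash
    # Snapshot installed Debian packages and archive the listings, as traced above.
    WORKSPACE="${WORKSPACE:-/w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp}"
    dpkg -l | grep '^ii' > /tmp/packages_end.txt
    if [ -f /tmp/packages_start.txt ] && [ -f /tmp/packages_end.txt ]; then
        diff /tmp/packages_start.txt /tmp/packages_end.txt > /tmp/packages_diff.txt || true
    fi
    mkdir -p "${WORKSPACE}/archives/"
    cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt "${WORKSPACE}/archives/"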
15:00:42 [policy-xacml-pdp-master-project-csit-verify-xacml-pdp] $ /bin/bash /tmp/jenkins4399119877412921294.sh
15:00:42 ---> create-netrc.sh
15:00:42 [policy-xacml-pdp-master-project-csit-verify-xacml-pdp] $ /bin/bash /tmp/jenkins6315400656309036051.sh
15:00:42 ---> python-tools-install.sh
15:00:42 Setup pyenv:
15:00:42 system
15:00:42 3.8.13
15:00:42 3.9.13
15:00:42 * 3.10.6 (set by /w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/.python-version)
15:00:42 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-qCrN from file:/tmp/.os_lf_venv
15:00:44 lf-activate-venv(): INFO: Installing: lftools
15:00:52 lf-activate-venv(): INFO: Adding /tmp/venv-qCrN/bin to PATH
15:00:52 [policy-xacml-pdp-master-project-csit-verify-xacml-pdp] $ /bin/bash /tmp/jenkins5286077448809273713.sh
15:00:52 ---> sudo-logs.sh
15:00:52 Archiving 'sudo' log..
15:00:52 [policy-xacml-pdp-master-project-csit-verify-xacml-pdp] $ /bin/bash /tmp/jenkins13351093357169172437.sh
15:00:52 ---> job-cost.sh
15:00:52 Setup pyenv:
15:00:52 system
15:00:52 3.8.13
15:00:52 3.9.13
15:00:52 * 3.10.6 (set by /w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/.python-version)
15:00:52 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-qCrN from file:/tmp/.os_lf_venv
15:00:54 lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15
15:00:59 lf-activate-venv(): INFO: Adding /tmp/venv-qCrN/bin to PATH
15:00:59 INFO: No Stack...
15:00:59 INFO: Retrieving Pricing Info for: v3-standard-8
15:00:59 INFO: Archiving Costs
15:00:59 [policy-xacml-pdp-master-project-csit-verify-xacml-pdp] $ /bin/bash -l /tmp/jenkins2930700617725488237.sh
15:00:59 ---> logs-deploy.sh
15:00:59 Setup pyenv:
15:00:59 system
15:00:59 3.8.13
15:00:59 3.9.13
15:00:59 * 3.10.6 (set by /w/workspace/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/.python-version)
15:01:00 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-qCrN from file:/tmp/.os_lf_venv
15:01:01 lf-activate-venv(): INFO: Installing: lftools
15:01:10 lf-activate-venv(): INFO: Adding /tmp/venv-qCrN/bin to PATH
15:01:10 INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/816
15:01:10 INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt
15:01:11 Archives upload complete.
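logs-deploy.sh pushes the console log and archived workspace files to the Nexus URL and path printed above. A sketch of the underlying lftools call; treat the exact subcommand arguments as an assumption about the script rather than its verbatim contents:

    # lftools "deploy logs" uploads build logs to a Nexus site repository.
    # BUILD_URL is the standard Jenkins environment variable; assumed here.
    lftools deploy logs "https://nexus.onap.org" \
        "production/vex-yul-ecomp-jenkins-1/policy-xacml-pdp-master-project-csit-verify-xacml-pdp/816" \
        "${BUILD_URL}"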
15:01:11 INFO: archiving logs to Nexus
15:01:12 ---> uname -a:
15:01:12 Linux prd-ubuntu1804-docker-8c-8g-20904 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
15:01:12
15:01:12
15:01:12 ---> lscpu:
15:01:12 Architecture: x86_64
15:01:12 CPU op-mode(s): 32-bit, 64-bit
15:01:12 Byte Order: Little Endian
15:01:12 CPU(s): 8
15:01:12 On-line CPU(s) list: 0-7
15:01:12 Thread(s) per core: 1
15:01:12 Core(s) per socket: 1
15:01:12 Socket(s): 8
15:01:12 NUMA node(s): 1
15:01:12 Vendor ID: AuthenticAMD
15:01:12 CPU family: 23
15:01:12 Model: 49
15:01:12 Model name: AMD EPYC-Rome Processor
15:01:12 Stepping: 0
15:01:12 CPU MHz: 2799.998
15:01:12 BogoMIPS: 5599.99
15:01:12 Virtualization: AMD-V
15:01:12 Hypervisor vendor: KVM
15:01:12 Virtualization type: full
15:01:12 L1d cache: 32K
15:01:12 L1i cache: 32K
15:01:12 L2 cache: 512K
15:01:12 L3 cache: 16384K
15:01:12 NUMA node0 CPU(s): 0-7
15:01:12 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities
15:01:12
15:01:12
15:01:12 ---> nproc:
15:01:12 8
15:01:12
15:01:12
15:01:12 ---> df -h:
15:01:12 Filesystem Size Used Avail Use% Mounted on
15:01:12 udev 16G 0 16G 0% /dev
15:01:12 tmpfs 3.2G 708K 3.2G 1% /run
15:01:12 /dev/vda1 155G 15G 141G 10% /
15:01:12 tmpfs 16G 0 16G 0% /dev/shm
15:01:12 tmpfs 5.0M 0 5.0M 0% /run/lock
15:01:12 tmpfs 16G 0 16G 0% /sys/fs/cgroup
15:01:12 /dev/vda15 105M 4.4M 100M 5% /boot/efi
15:01:12 tmpfs 3.2G 0 3.2G 0% /run/user/1001
15:01:12
15:01:12
15:01:12 ---> free -m:
15:01:12 total used free shared buff/cache available
15:01:12 Mem: 32167 878 24291 0 6997 30833
15:01:12 Swap: 1023 0 1023
15:01:12
15:01:12
15:01:12 ---> ip addr:
15:01:12 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
15:01:12 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
15:01:12 inet 127.0.0.1/8 scope host lo
15:01:12 valid_lft forever preferred_lft forever
15:01:12 inet6 ::1/128 scope host
15:01:12 valid_lft forever preferred_lft forever
15:01:12 2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1458 qdisc mq state UP group default qlen 1000
15:01:12 link/ether fa:16:3e:53:53:2a brd ff:ff:ff:ff:ff:ff
15:01:12 inet 10.30.107.73/23 brd 10.30.107.255 scope global dynamic ens3
15:01:12 valid_lft 85985sec preferred_lft 85985sec
15:01:12 inet6 fe80::f816:3eff:fe53:532a/64 scope link
15:01:12 valid_lft forever preferred_lft forever
15:01:12 3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
15:01:12 link/ether 02:42:59:2e:56:b8 brd ff:ff:ff:ff:ff:ff
15:01:12 inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0
15:01:12 valid_lft forever preferred_lft forever
15:01:12 inet6 fe80::42:59ff:fe2e:56b8/64 scope link
15:01:12 valid_lft forever preferred_lft forever
15:01:12
15:01:12
15:01:12 ---> sar -b -r -n DEV:
15:01:12 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-20904) 06/13/25 _x86_64_ (8 CPU)
15:01:12
15:01:12 14:54:20 LINUX RESTART (8 CPU)
15:01:12
15:01:12 14:55:01 tps rtps wtps bread/s bwrtn/s
15:01:12 14:56:01 171.27 22.80 148.48 2287.22 75249.86
15:01:12 14:57:01 691.22 4.55 686.67 472.99 235366.24
15:01:12 14:58:01 143.38 0.22 143.16 32.66 19778.30
15:01:12 14:59:01 95.93 0.23 95.70 15.60 18054.72
15:01:12 15:00:01 15.18 0.05 15.13 10.80 321.28
15:01:12 15:01:01 70.04 1.80 68.24 93.97 2456.11
15:01:12 Average: 197.83 4.94 192.89 485.53 58536.20
15:01:12
15:01:12 14:55:01 kbmemfree kbavail kbmemused %memused kbbuffers kbcached kbcommit %commit kbactive kbinact kbdirty
15:01:12 14:56:01 28616684 31565372 4322536 13.12 83180 3150268 2494316 7.34 1037832 2930684 1324056
15:01:12 14:57:01 24079988 30397324 8859232 26.90 158484 6266296 7057264 20.76 2421204 5816792 908
15:01:12 14:58:01 22814544 29660452 10124676 30.74 181100 6728568 8251508 24.28 3270380 6167192 29992
15:01:12 14:59:01 22604888 29557656 10334332 31.37 196020 6809912 8385752 24.67 3416276 6219948 596
15:01:12 15:00:01 22812732 29723028 10126488 30.74 196244 6772556 7442444 21.90 3268588 6173764 116
15:01:12 15:01:01 24897560 31597092 8041660 24.41 198020 6549212 1629636 4.79 1444896 5975372 11188
15:01:12 Average: 24304399 30416821 8634821 26.21 168841 6046135 5876820 17.29 2476529 5547292 227809
15:01:12
15:01:12 14:55:01 IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil
15:01:12 14:56:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
15:01:12 14:56:01 ens3 764.51 505.37 15214.25 46.63 0.00 0.00 0.00 0.00
15:01:12 14:56:01 lo 11.63 11.63 1.10 1.10 0.00 0.00 0.00 0.00
15:01:12 14:57:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
15:01:12 14:57:01 veth8925360 0.00 0.23 0.00 0.01 0.00 0.00 0.00 0.00
15:01:12 14:57:01 br-1f72558afd59 36.41 44.99 2.29 317.54 0.00 0.00 0.00 0.00
15:01:12 14:57:01 vetha8ec03b 1.52 1.52 0.16 0.16 0.00 0.00 0.00 0.00
15:01:12 14:58:01 docker0 82.57 104.08 4.47 1053.84 0.00 0.00 0.00 0.00
15:01:12 14:58:01 veth81cc51b 82.57 104.20 5.60 1053.85 0.00 0.00 0.00 0.09
15:01:12 14:58:01 veth8925360 0.45 0.50 0.05 1.00 0.00 0.00 0.00 0.00
15:01:12 14:58:01 br-1f72558afd59 0.48 0.42 0.03 0.03 0.00 0.00 0.00 0.00
15:01:12 14:59:01 docker0 39.84 57.66 3.46 292.92 0.00 0.00 0.00 0.00
15:01:12 14:59:01 veth8925360 0.52 0.63 0.05 1.27 0.00 0.00 0.00 0.00
15:01:12 14:59:01 br-1f72558afd59 0.45 0.15 0.02 0.01 0.00 0.00 0.00 0.00
15:01:12 14:59:01 vetha8ec03b 34.94 27.48 3.79 4.06 0.00 0.00 0.00 0.00
15:01:12 15:00:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
15:01:12 15:00:01 br-1f72558afd59 0.02 0.00 0.00 0.00 0.00 0.00 0.00 0.00
15:01:12 15:00:01 vetha8ec03b 14.20 9.67 1.11 1.41 0.00 0.00 0.00 0.00
15:01:12 15:00:01 vethff37da4 12.73 17.06 2.21 1.70 0.00 0.00 0.00 0.00
15:01:12 15:01:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
15:01:12 15:01:01 ens3 1965.93 1252.45 36531.61 189.95 0.00 0.00 0.00 0.00
15:01:12 15:01:01 lo 27.02 27.02 2.43 2.43 0.00 0.00 0.00 0.00
15:01:12 Average: docker0 20.40 26.96 1.32 224.45 0.00 0.00 0.00 0.00
15:01:12 Average: ens3 272.08 171.29 5947.31 20.20 0.00 0.00 0.00 0.00
15:01:12 Average: lo 3.84 3.84 0.35 0.35 0.00 0.00 0.00 0.00
15:01:12
15:01:12
15:01:12 ---> sar -P ALL:
15:01:12 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-20904) 06/13/25 _x86_64_ (8 CPU)
15:01:12
15:01:12 14:54:20 LINUX RESTART (8 CPU)
15:01:12
15:01:12 14:55:01 CPU %user %nice %system %iowait %steal %idle
15:01:12 14:56:01 all 10.39 0.00 1.54 1.81 0.04 86.21
15:01:12 14:56:01 0 4.33 0.00 1.34 0.28 0.02 94.03
15:01:12 14:56:01 1 8.01 0.00 1.39 0.84 0.03 89.74
15:01:12 14:56:01 2 3.31 0.00 1.20 3.56 0.07 91.86
15:01:12 14:56:01 3 26.81 0.00 2.39 1.27 0.08 69.44
15:01:12 14:56:01 4 6.01 0.00 1.22 0.17 0.02 92.59
15:01:12 14:56:01 5 8.52 0.00 1.56 1.54 0.05 88.33
15:01:12 14:56:01 6 24.15 0.00 2.33 0.45 0.03 73.03
15:01:12 14:56:01 7 2.04 0.00 0.95 6.35 0.02 90.64
15:01:12 14:57:01 all 24.62 0.00 7.41 8.25 0.10 59.61
15:01:12 14:57:01 0 31.29 0.00 7.51 3.27 0.10 57.83
15:01:12 14:57:01 1 25.08 0.00 7.77 0.85 0.08 66.21
15:01:12 14:57:01 2 19.10 0.00 7.95 4.93 0.15 67.87
15:01:12 14:57:01 3 21.14 0.00 6.70 21.20 0.08 50.87
15:01:12 14:57:01 4 27.14 0.00 6.12 2.81 0.09 63.85
15:01:12 14:57:01 5 25.87 0.00 6.84 2.42 0.08 64.79
15:01:12 14:57:01 6 20.62 0.00 9.00 17.94 0.10 52.34
15:01:12 14:57:01 7 26.77 0.00 7.41 12.57 0.10 53.15
15:01:12 14:58:01 all 17.85 0.00 2.15 0.65 0.08 79.27
15:01:12 14:58:01 0 21.28 0.00 2.05 0.02 0.07 76.58
15:01:12 14:58:01 1 21.73 0.00 2.54 0.37 0.08 75.27
15:01:12 14:58:01 2 13.59 0.00 1.81 0.07 0.08 84.46
15:01:12 14:58:01 3 14.05 0.00 2.17 1.48 0.10 82.20
15:01:12 14:58:01 4 13.67 0.00 1.63 0.62 0.08 84.00
15:01:12 14:58:01 5 25.05 0.00 2.26 0.89 0.07 71.73
15:01:12 14:58:01 6 15.53 0.00 2.83 1.71 0.08 79.85
15:01:12 14:58:01 7 17.89 0.00 1.87 0.03 0.07 80.14
15:01:12 14:59:01 all 9.06 0.00 1.79 0.50 0.06 88.59
15:01:12 14:59:01 0 8.93 0.00 1.54 0.07 0.07 89.40
15:01:12 14:59:01 1 6.08 0.00 2.12 0.08 0.05 91.66
15:01:12 14:59:01 2 8.66 0.00 1.58 0.07 0.07 89.62
15:01:12 14:59:01 3 10.37 0.00 2.15 0.05 0.07 87.36
15:01:12 14:59:01 4 7.92 0.00 1.86 1.26 0.08 88.88
15:01:12 14:59:01 5 13.07 0.00 2.11 2.33 0.05 82.45
15:01:12 14:59:01 6 10.37 0.00 1.76 0.08 0.08 87.71
15:01:12 14:59:01 7 7.10 0.00 1.27 0.03 0.05 91.55
15:01:12 15:00:01 all 1.89 0.00 0.50 0.03 0.04 97.54
15:01:12 15:00:01 0 1.03 0.00 0.53 0.03 0.03 98.36
15:01:12 15:00:01 1 1.72 0.00 0.67 0.02 0.03 97.56
15:01:12 15:00:01 2 2.32 0.00 0.63 0.02 0.05 96.98
15:01:12 15:00:01 3 3.04 0.00 0.38 0.00 0.05 96.53
15:01:12 15:00:01 4 1.64 0.00 0.43 0.10 0.05 97.78
15:01:12 15:00:01 5 2.15 0.00 0.48 0.02 0.07 97.28
15:01:12 15:00:01 6 1.90 0.00 0.38 0.02 0.05 97.65
15:01:12 15:00:01 7 1.33 0.00 0.42 0.05 0.02 98.18
15:01:12 15:01:01 all 6.23 0.00 0.68 0.18 0.03 92.88
15:01:12 15:01:01 0 3.57 0.00 0.72 0.07 0.02 95.63
15:01:12 15:01:01 1 0.94 0.00 0.58 0.05 0.03 98.40
15:01:12 15:01:01 2 1.17 0.00 0.40 0.02 0.02 98.40
15:01:12 15:01:01 3 9.26 0.00 0.83 0.03 0.03 89.84
15:01:12 15:01:01 4 13.30 0.00 0.73 0.07 0.03 85.87
15:01:12 15:01:01 5 7.47 0.00 0.65 0.17 0.02 91.69
15:01:12 15:01:01 6 10.57 0.00 0.89 0.08 0.05 88.41
15:01:12 15:01:01 7 3.54 0.00 0.63 0.93 0.03 94.86
15:01:12 Average: all 11.64 0.00 2.33 1.89 0.06 84.08
15:01:12 Average: 0 11.69 0.00 2.27 0.62 0.05 85.38
15:01:12 Average: 1 10.56 0.00 2.50 0.37 0.05 86.52
15:01:12 Average: 2 8.01 0.00 2.25 1.44 0.07 88.23
15:01:12 Average: 3 14.06 0.00 2.42 3.96 0.07 79.49
15:01:12 Average: 4 11.56 0.00 1.98 0.83 0.06 85.57
15:01:12 Average: 5 13.66 0.00 2.31 1.22 0.06 82.76
15:01:12 Average: 6 13.84 0.00 2.85 3.35 0.07 79.89
15:01:12 Average: 7 9.74 0.00 2.08 3.31 0.05 84.82
15:01:12
15:01:12
15:01:12
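The two reports above were produced by sysstat, using the commands shown in their headers. To reproduce them on a host where sysstat's sadc data collection is enabled:

    # I/O transfer rates (-b), memory utilization (-r), and per-interface network stats (-n DEV)
    sar -b -r -n DEV
    # Per-CPU utilization for every CPU plus the "all" aggregate
    sar -P ALL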